2025-09-20 08:54:32.136661 | Job console starting
2025-09-20 08:54:32.157727 | Updating git repos
2025-09-20 08:54:32.229900 | Cloning repos into workspace
2025-09-20 08:54:32.500440 | Restoring repo states
2025-09-20 08:54:32.518293 | Merging changes
2025-09-20 08:54:32.518312 | Checking out repos
2025-09-20 08:54:32.892467 | Preparing playbooks
2025-09-20 08:54:33.498302 | Running Ansible setup
2025-09-20 08:54:37.640750 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-20 08:54:38.366063 |
2025-09-20 08:54:38.366224 | PLAY [Base pre]
2025-09-20 08:54:38.382778 |
2025-09-20 08:54:38.383005 | TASK [Setup log path fact]
2025-09-20 08:54:38.416796 | orchestrator | ok
2025-09-20 08:54:38.436152 |
2025-09-20 08:54:38.436280 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-20 08:54:38.468436 | orchestrator | ok
2025-09-20 08:54:38.480448 |
2025-09-20 08:54:38.480555 | TASK [emit-job-header : Print job information]
2025-09-20 08:54:38.534530 | # Job Information
2025-09-20 08:54:38.534777 | Ansible Version: 2.16.14
2025-09-20 08:54:38.534828 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-20 08:54:38.534901 | Pipeline: post
2025-09-20 08:54:38.534934 | Executor: 521e9411259a
2025-09-20 08:54:38.534963 | Triggered by: https://github.com/osism/testbed/commit/f39fa9ed7b5db05d8a560e94d40d0095e1db1ee2
2025-09-20 08:54:38.534994 | Event ID: 68d89ccc-95ff-11f0-8b41-96243eda7b0e
2025-09-20 08:54:38.543381 |
2025-09-20 08:54:38.543504 | LOOP [emit-job-header : Print node information]
2025-09-20 08:54:38.680342 | orchestrator | ok:
2025-09-20 08:54:38.680584 | orchestrator | # Node Information
2025-09-20 08:54:38.680624 | orchestrator | Inventory Hostname: orchestrator
2025-09-20 08:54:38.680649 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-20 08:54:38.680671 | orchestrator | Username: zuul-testbed02
2025-09-20 08:54:38.680692 | orchestrator | Distro: Debian 12.12
2025-09-20 08:54:38.680749 | orchestrator | Provider: static-testbed
2025-09-20 08:54:38.680775 | orchestrator | Region:
2025-09-20 08:54:38.680797 | orchestrator | Label: testbed-orchestrator
2025-09-20 08:54:38.680818 | orchestrator | Product Name: OpenStack Nova
2025-09-20 08:54:38.680838 | orchestrator | Interface IP: 81.163.193.140
2025-09-20 08:54:38.701887 |
2025-09-20 08:54:38.701999 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-20 08:54:39.171610 | orchestrator -> localhost | changed
2025-09-20 08:54:39.188152 |
2025-09-20 08:54:39.188321 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-20 08:54:40.216345 | orchestrator -> localhost | changed
2025-09-20 08:54:40.230651 |
2025-09-20 08:54:40.230813 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-20 08:54:40.518284 | orchestrator -> localhost | ok
2025-09-20 08:54:40.525569 |
2025-09-20 08:54:40.525682 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-20 08:54:40.554535 | orchestrator | ok
2025-09-20 08:54:40.570849 | orchestrator | included: /var/lib/zuul/builds/c03789dedb7743b89e40ae39dfe93df5/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-20 08:54:40.578854 |
2025-09-20 08:54:40.578956 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-20 08:54:41.610901 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-20 08:54:41.611243 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c03789dedb7743b89e40ae39dfe93df5/work/c03789dedb7743b89e40ae39dfe93df5_id_rsa
2025-09-20 08:54:41.611303 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c03789dedb7743b89e40ae39dfe93df5/work/c03789dedb7743b89e40ae39dfe93df5_id_rsa.pub
2025-09-20 08:54:41.611341 | orchestrator -> localhost | The key fingerprint is:
2025-09-20 08:54:41.611374 | orchestrator -> localhost | SHA256:WrZdFUIWE76aPp4BcaLNvwfDg7kUmBQVhB7V+SwMj9s zuul-build-sshkey
2025-09-20 08:54:41.611405 | orchestrator -> localhost | The key's randomart image is:
2025-09-20 08:54:41.611446 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-20 08:54:41.611476 | orchestrator -> localhost | | .*=o oBo. |
2025-09-20 08:54:41.611504 | orchestrator -> localhost | | + . oo o . |
2025-09-20 08:54:41.611532 | orchestrator -> localhost | | o +o=.o. . |
2025-09-20 08:54:41.611558 | orchestrator -> localhost | | ++o++ oo |
2025-09-20 08:54:41.611585 | orchestrator -> localhost | | . SB .o |
2025-09-20 08:54:41.611619 | orchestrator -> localhost | | +==E+ |
2025-09-20 08:54:41.611647 | orchestrator -> localhost | | ....*+ |
2025-09-20 08:54:41.611694 | orchestrator -> localhost | | ...+. |
2025-09-20 08:54:41.611741 | orchestrator -> localhost | | .=o |
2025-09-20 08:54:41.611770 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-20 08:54:41.611849 | orchestrator -> localhost | ok: Runtime: 0:00:00.569950
2025-09-20 08:54:41.620953 |
2025-09-20 08:54:41.621072 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-20 08:54:41.652173 | orchestrator | ok
2025-09-20 08:54:41.664345 | orchestrator | included: /var/lib/zuul/builds/c03789dedb7743b89e40ae39dfe93df5/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-20 08:54:41.674421 |
2025-09-20 08:54:41.674523 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-20 08:54:41.697888 | orchestrator | skipping: Conditional result was False
2025-09-20 08:54:41.705519 |
2025-09-20 08:54:41.705624 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-20 08:54:42.299372 | orchestrator | changed
2025-09-20 08:54:42.308961 |
2025-09-20 08:54:42.309085 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-20 08:54:42.584377 | orchestrator | ok
2025-09-20 08:54:42.591872 |
2025-09-20 08:54:42.591986 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-20 08:54:43.019870 | orchestrator | ok
2025-09-20 08:54:43.029801 |
2025-09-20 08:54:43.029939 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-20 08:54:43.452654 | orchestrator | ok
2025-09-20 08:54:43.460776 |
2025-09-20 08:54:43.460903 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-20 08:54:43.484630 | orchestrator | skipping: Conditional result was False
2025-09-20 08:54:43.491244 |
2025-09-20 08:54:43.491345 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-20 08:54:43.932148 | orchestrator -> localhost | changed
2025-09-20 08:54:43.946574 |
2025-09-20 08:54:43.946693 | TASK [add-build-sshkey : Add back temp key]
2025-09-20 08:54:44.266245 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c03789dedb7743b89e40ae39dfe93df5/work/c03789dedb7743b89e40ae39dfe93df5_id_rsa (zuul-build-sshkey)
2025-09-20 08:54:44.266815 | orchestrator -> localhost | ok: Runtime: 0:00:00.009979
2025-09-20 08:54:44.282925 |
2025-09-20 08:54:44.283104 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-20 08:54:44.698143 | orchestrator | ok
2025-09-20 08:54:44.704191 |
2025-09-20 08:54:44.704300 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-20 08:54:44.728852 | orchestrator | skipping: Conditional result was False
2025-09-20 08:54:44.777638 |
2025-09-20 08:54:44.777781 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-20 08:54:45.180507 | orchestrator | ok
2025-09-20 08:54:45.197772 |
2025-09-20 08:54:45.197900 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-20 08:54:45.237491 | orchestrator | ok
2025-09-20 08:54:45.245215 |
2025-09-20 08:54:45.245320 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-20 08:54:45.524275 | orchestrator -> localhost | ok
2025-09-20 08:54:45.539042 |
2025-09-20 08:54:45.539178 | TASK [validate-host : Collect information about the host]
2025-09-20 08:54:46.774402 | orchestrator | ok
2025-09-20 08:54:46.793157 |
2025-09-20 08:54:46.793286 | TASK [validate-host : Sanitize hostname]
2025-09-20 08:54:46.862463 | orchestrator | ok
2025-09-20 08:54:46.868542 |
2025-09-20 08:54:46.868664 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-20 08:54:47.388790 | orchestrator -> localhost | changed
2025-09-20 08:54:47.396820 |
2025-09-20 08:54:47.397033 | TASK [validate-host : Collect information about zuul worker]
2025-09-20 08:54:47.828069 | orchestrator | ok
2025-09-20 08:54:47.837181 |
2025-09-20 08:54:47.837906 | TASK [validate-host : Write out all zuul information for each host]
2025-09-20 08:54:48.358940 | orchestrator -> localhost | changed
2025-09-20 08:54:48.369633 |
2025-09-20 08:54:48.369774 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-20 08:54:48.659907 | orchestrator | ok
2025-09-20 08:54:48.667552 |
2025-09-20 08:54:48.667675 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-20 08:55:29.416082 | orchestrator | changed:
2025-09-20 08:55:29.416308 | orchestrator | .d..t...... src/
2025-09-20 08:55:29.416344 | orchestrator | .d..t...... src/github.com/
2025-09-20 08:55:29.416369 | orchestrator | .d..t...... src/github.com/osism/
2025-09-20 08:55:29.416391 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-20 08:55:29.416413 | orchestrator | RedHat.yml
2025-09-20 08:55:29.429362 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-20 08:55:29.429379 | orchestrator | RedHat.yml
2025-09-20 08:55:29.429431 | orchestrator | = 1.53.0"...
2025-09-20 08:55:39.679894 | orchestrator | 08:55:39.679 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-20 08:55:39.707304 | orchestrator | 08:55:39.707 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-20 08:55:40.143562 | orchestrator | 08:55:40.143 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-20 08:55:40.931136 | orchestrator | 08:55:40.930 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-20 08:55:41.323607 | orchestrator | 08:55:41.323 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-20 08:55:42.014156 | orchestrator | 08:55:42.013 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-20 08:55:42.078454 | orchestrator | 08:55:42.078 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-20 08:55:42.643967 | orchestrator | 08:55:42.643 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-20 08:55:42.644071 | orchestrator | 08:55:42.643 STDOUT terraform: Providers are signed by their developers.
2025-09-20 08:55:42.644080 | orchestrator | 08:55:42.643 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-20 08:55:42.644085 | orchestrator | 08:55:42.644 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-20 08:55:42.644181 | orchestrator | 08:55:42.644 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-20 08:55:42.644229 | orchestrator | 08:55:42.644 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-20 08:55:42.644268 | orchestrator | 08:55:42.644 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-20 08:55:42.644303 | orchestrator | 08:55:42.644 STDOUT terraform: you run "tofu init" in the future.
2025-09-20 08:55:42.644803 | orchestrator | 08:55:42.644 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-20 08:55:42.644890 | orchestrator | 08:55:42.644 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-20 08:55:42.644936 | orchestrator | 08:55:42.644 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-20 08:55:42.644957 | orchestrator | 08:55:42.644 STDOUT terraform: should now work.
2025-09-20 08:55:42.644999 | orchestrator | 08:55:42.644 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-20 08:55:42.645048 | orchestrator | 08:55:42.644 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-20 08:55:42.645108 | orchestrator | 08:55:42.645 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-20 08:55:42.760238 | orchestrator | 08:55:42.760 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-09-20 08:55:42.760336 | orchestrator | 08:55:42.760 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-20 08:55:42.969824 | orchestrator | 08:55:42.969 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-20 08:55:42.969893 | orchestrator | 08:55:42.969 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-20 08:55:42.969901 | orchestrator | 08:55:42.969 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-20 08:55:42.969905 | orchestrator | 08:55:42.969 STDOUT terraform: for this configuration.
2025-09-20 08:55:43.107816 | orchestrator | 08:55:43.107 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-09-20 08:55:43.107894 | orchestrator | 08:55:43.107 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-20 08:55:43.212040 | orchestrator | 08:55:43.211 STDOUT terraform: ci.auto.tfvars
2025-09-20 08:55:43.222124 | orchestrator | 08:55:43.219 STDOUT terraform: default_custom.tf
2025-09-20 08:55:43.344293 | orchestrator | 08:55:43.344 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
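The Terragrunt warnings above flag three deprecated invocations and name their replacements. As a minimal sketch (not the job's actual playbook), the equivalent non-deprecated command sequence for the init/workspace/fmt steps seen in this log would look like the following; the `TG_TF_PATH` value is taken from the warning text itself, and the exact working directory is assumed:

```shell
# Point Terragrunt at the tofu binary (replaces the deprecated TERRAGRUNT_TFPATH).
export TG_TF_PATH=/home/zuul-testbed02/terraform

# Initialize providers; records selections in .terraform.lock.hcl.
terragrunt run -- init

# Create and switch to an isolated "ci" workspace
# (replaces the deprecated `terragrunt workspace ...` form).
terragrunt run -- workspace new ci

# Format the configuration (replaces the deprecated `terragrunt fmt`).
terragrunt run -- fmt
```

Each `terragrunt run -- <cmd>` forwards `<cmd>` to the underlying OpenTofu binary, which is why the output lines in the log are prefixed `STDOUT terraform:`.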
2025-09-20 08:55:44.422753 | orchestrator | 08:55:44.422 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-20 08:55:44.958763 | orchestrator | 08:55:44.958 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-20 08:55:45.217449 | orchestrator | 08:55:45.217 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-20 08:55:45.217523 | orchestrator | 08:55:45.217 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-20 08:55:45.217530 | orchestrator | 08:55:45.217 STDOUT terraform:   + create
2025-09-20 08:55:45.217535 | orchestrator | 08:55:45.217 STDOUT terraform:  <= read (data resources)
2025-09-20 08:55:45.217579 | orchestrator | 08:55:45.217 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-20 08:55:45.217750 | orchestrator | 08:55:45.217 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-20 08:55:45.217776 | orchestrator | 08:55:45.217 STDOUT terraform:   # (config refers to values not yet known)
2025-09-20 08:55:45.217808 | orchestrator | 08:55:45.217 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-20 08:55:45.217838 | orchestrator | 08:55:45.217 STDOUT terraform:       + checksum = (known after apply)
2025-09-20 08:55:45.217866 | orchestrator | 08:55:45.217 STDOUT terraform:       + created_at = (known after apply)
2025-09-20 08:55:45.217894 | orchestrator | 08:55:45.217 STDOUT terraform:       + file = (known after apply)
2025-09-20 08:55:45.217922 | orchestrator | 08:55:45.217 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.217961 | orchestrator | 08:55:45.217 STDOUT terraform:       + metadata = (known after apply)
2025-09-20 08:55:45.217981 | orchestrator | 08:55:45.217 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-20 08:55:45.218027 | orchestrator | 08:55:45.217 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-09-20 08:55:45.218053 | orchestrator | 08:55:45.218 STDOUT terraform:       + most_recent = true
2025-09-20 08:55:45.218080 | orchestrator | 08:55:45.218 STDOUT terraform:       + name = (known after apply)
2025-09-20 08:55:45.218109 | orchestrator | 08:55:45.218 STDOUT terraform:       + protected = (known after apply)
2025-09-20 08:55:45.218137 | orchestrator | 08:55:45.218 STDOUT terraform:       + region = (known after apply)
2025-09-20 08:55:45.218164 | orchestrator | 08:55:45.218 STDOUT terraform:       + schema = (known after apply)
2025-09-20 08:55:45.218192 | orchestrator | 08:55:45.218 STDOUT terraform:       + size_bytes = (known after apply)
2025-09-20 08:55:45.218221 | orchestrator | 08:55:45.218 STDOUT terraform:       + tags = (known after apply)
2025-09-20 08:55:45.218248 | orchestrator | 08:55:45.218 STDOUT terraform:       + updated_at = (known after apply)
2025-09-20 08:55:45.218263 | orchestrator | 08:55:45.218 STDOUT terraform:     }
2025-09-20 08:55:45.218474 | orchestrator | 08:55:45.218 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-20 08:55:45.218501 | orchestrator | 08:55:45.218 STDOUT terraform:   # (config refers to values not yet known)
2025-09-20 08:55:45.218548 | orchestrator | 08:55:45.218 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-20 08:55:45.218579 | orchestrator | 08:55:45.218 STDOUT terraform:       + checksum = (known after apply)
2025-09-20 08:55:45.218615 | orchestrator | 08:55:45.218 STDOUT terraform:       + created_at = (known after apply)
2025-09-20 08:55:45.218650 | orchestrator | 08:55:45.218 STDOUT terraform:       + file = (known after apply)
2025-09-20 08:55:45.218676 | orchestrator | 08:55:45.218 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.218704 | orchestrator | 08:55:45.218 STDOUT terraform:       + metadata = (known after apply)
2025-09-20 08:55:45.218733 | orchestrator | 08:55:45.218 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-20 08:55:45.218761 | orchestrator | 08:55:45.218 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-09-20 08:55:45.218786 | orchestrator | 08:55:45.218 STDOUT terraform:       + most_recent = true
2025-09-20 08:55:45.218816 | orchestrator | 08:55:45.218 STDOUT terraform:       + name = (known after apply)
2025-09-20 08:55:45.218843 | orchestrator | 08:55:45.218 STDOUT terraform:       + protected = (known after apply)
2025-09-20 08:55:45.218871 | orchestrator | 08:55:45.218 STDOUT terraform:       + region = (known after apply)
2025-09-20 08:55:45.218898 | orchestrator | 08:55:45.218 STDOUT terraform:       + schema = (known after apply)
2025-09-20 08:55:45.218926 | orchestrator | 08:55:45.218 STDOUT terraform:       + size_bytes = (known after apply)
2025-09-20 08:55:45.218954 | orchestrator | 08:55:45.218 STDOUT terraform:       + tags = (known after apply)
2025-09-20 08:55:45.218982 | orchestrator | 08:55:45.218 STDOUT terraform:       + updated_at = (known after apply)
2025-09-20 08:55:45.218997 | orchestrator | 08:55:45.218 STDOUT terraform:     }
2025-09-20 08:55:45.219037 | orchestrator | 08:55:45.219 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-20 08:55:45.219066 | orchestrator | 08:55:45.219 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-20 08:55:45.219102 | orchestrator | 08:55:45.219 STDOUT terraform:       + content = (known after apply)
2025-09-20 08:55:45.219137 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-20 08:55:45.219170 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-20 08:55:45.219204 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-20 08:55:45.219239 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-20 08:55:45.219274 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-20 08:55:45.219308 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-20 08:55:45.219331 | orchestrator | 08:55:45.219 STDOUT terraform:       + directory_permission = "0777"
2025-09-20 08:55:45.219355 | orchestrator | 08:55:45.219 STDOUT terraform:       + file_permission = "0644"
2025-09-20 08:55:45.219392 | orchestrator | 08:55:45.219 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-09-20 08:55:45.219452 | orchestrator | 08:55:45.219 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.219459 | orchestrator | 08:55:45.219 STDOUT terraform:     }
2025-09-20 08:55:45.219473 | orchestrator | 08:55:45.219 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-20 08:55:45.219497 | orchestrator | 08:55:45.219 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-20 08:55:45.219533 | orchestrator | 08:55:45.219 STDOUT terraform:       + content = (known after apply)
2025-09-20 08:55:45.219567 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-20 08:55:45.219601 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-20 08:55:45.219635 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-20 08:55:45.219669 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-20 08:55:45.219704 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-20 08:55:45.219742 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-20 08:55:45.219762 | orchestrator | 08:55:45.219 STDOUT terraform:       + directory_permission = "0777"
2025-09-20 08:55:45.219785 | orchestrator | 08:55:45.219 STDOUT terraform:       + file_permission = "0644"
2025-09-20 08:55:45.219815 | orchestrator | 08:55:45.219 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-09-20 08:55:45.219851 | orchestrator | 08:55:45.219 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.219858 | orchestrator | 08:55:45.219 STDOUT terraform:     }
2025-09-20 08:55:45.219884 | orchestrator | 08:55:45.219 STDOUT terraform:   # local_file.inventory will be created
2025-09-20 08:55:45.219908 | orchestrator | 08:55:45.219 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-20 08:55:45.219942 | orchestrator | 08:55:45.219 STDOUT terraform:       + content = (known after apply)
2025-09-20 08:55:45.219978 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-20 08:55:45.220017 | orchestrator | 08:55:45.219 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-20 08:55:45.220052 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-20 08:55:45.220085 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-20 08:55:45.220120 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-20 08:55:45.220153 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-20 08:55:45.220179 | orchestrator | 08:55:45.220 STDOUT terraform:       + directory_permission = "0777"
2025-09-20 08:55:45.220202 | orchestrator | 08:55:45.220 STDOUT terraform:       + file_permission = "0644"
2025-09-20 08:55:45.220232 | orchestrator | 08:55:45.220 STDOUT terraform:       + filename = "inventory.ci"
2025-09-20 08:55:45.220269 | orchestrator | 08:55:45.220 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.220276 | orchestrator | 08:55:45.220 STDOUT terraform:     }
2025-09-20 08:55:45.220305 | orchestrator | 08:55:45.220 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-20 08:55:45.220334 | orchestrator | 08:55:45.220 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-20 08:55:45.220366 | orchestrator | 08:55:45.220 STDOUT terraform:       + content = (sensitive value)
2025-09-20 08:55:45.220398 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-20 08:55:45.220443 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-20 08:55:45.220477 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-20 08:55:45.220510 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-20 08:55:45.220548 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-20 08:55:45.220582 | orchestrator | 08:55:45.220 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-20 08:55:45.220605 | orchestrator | 08:55:45.220 STDOUT terraform:       + directory_permission = "0700"
2025-09-20 08:55:45.220631 | orchestrator | 08:55:45.220 STDOUT terraform:       + file_permission = "0600"
2025-09-20 08:55:45.220659 | orchestrator | 08:55:45.220 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-09-20 08:55:45.220698 | orchestrator | 08:55:45.220 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.220704 | orchestrator | 08:55:45.220 STDOUT terraform:     }
2025-09-20 08:55:45.220731 | orchestrator | 08:55:45.220 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-20 08:55:45.220769 | orchestrator | 08:55:45.220 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-20 08:55:45.220790 | orchestrator | 08:55:45.220 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.220797 | orchestrator | 08:55:45.220 STDOUT terraform:     }
2025-09-20 08:55:45.220847 | orchestrator | 08:55:45.220 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-20 08:55:45.220900 | orchestrator | 08:55:45.220 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-20 08:55:45.220935 | orchestrator | 08:55:45.220 STDOUT terraform:       + attachment = (known after apply)
2025-09-20 08:55:45.220958 | orchestrator | 08:55:45.220 STDOUT terraform:       + availability_zone = "nova"
2025-09-20 08:55:45.220994 | orchestrator | 08:55:45.220 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.221029 | orchestrator | 08:55:45.220 STDOUT terraform:       + image_id = (known after apply)
2025-09-20 08:55:45.221066 | orchestrator | 08:55:45.221 STDOUT terraform:       + metadata = (known after apply)
2025-09-20 08:55:45.221110 | orchestrator | 08:55:45.221 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-09-20 08:55:45.221146 | orchestrator | 08:55:45.221 STDOUT terraform:       + region = (known after apply)
2025-09-20 08:55:45.221165 | orchestrator | 08:55:45.221 STDOUT terraform:       + size = 80
2025-09-20 08:55:45.221188 | orchestrator | 08:55:45.221 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-20 08:55:45.221213 | orchestrator | 08:55:45.221 STDOUT terraform:       + volume_type = "ssd"
2025-09-20 08:55:45.221220 | orchestrator | 08:55:45.221 STDOUT terraform:     }
2025-09-20 08:55:45.221292 | orchestrator | 08:55:45.221 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-20 08:55:45.221335 | orchestrator | 08:55:45.221 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-20 08:55:45.221370 | orchestrator | 08:55:45.221 STDOUT terraform:       + attachment = (known after apply)
2025-09-20 08:55:45.221394 | orchestrator | 08:55:45.221 STDOUT terraform:       + availability_zone = "nova"
2025-09-20 08:55:45.221454 | orchestrator | 08:55:45.221 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.221488 | orchestrator | 08:55:45.221 STDOUT terraform:       + image_id = (known after apply)
2025-09-20 08:55:45.221522 | orchestrator | 08:55:45.221 STDOUT terraform:       + metadata = (known after apply)
2025-09-20 08:55:45.221569 | orchestrator | 08:55:45.221 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-09-20 08:55:45.221603 | orchestrator | 08:55:45.221 STDOUT terraform:       + region = (known after apply)
2025-09-20 08:55:45.221626 | orchestrator | 08:55:45.221 STDOUT terraform:       + size = 80
2025-09-20 08:55:45.221650 | orchestrator | 08:55:45.221 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-20 08:55:45.221674 | orchestrator | 08:55:45.221 STDOUT terraform:       + volume_type = "ssd"
2025-09-20 08:55:45.221687 | orchestrator | 08:55:45.221 STDOUT terraform:     }
2025-09-20 08:55:45.221734 | orchestrator | 08:55:45.221 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-20 08:55:45.221778 | orchestrator | 08:55:45.221 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-20 08:55:45.221812 | orchestrator | 08:55:45.221 STDOUT terraform:       + attachment = (known after apply)
2025-09-20 08:55:45.221837 | orchestrator | 08:55:45.221 STDOUT terraform:       + availability_zone = "nova"
2025-09-20 08:55:45.221872 | orchestrator | 08:55:45.221 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.221907 | orchestrator | 08:55:45.221 STDOUT terraform:       + image_id = (known after apply)
2025-09-20 08:55:45.221942 | orchestrator | 08:55:45.221 STDOUT terraform:       + metadata = (known after apply)
2025-09-20 08:55:45.221988 | orchestrator | 08:55:45.221 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-09-20 08:55:45.222043 | orchestrator | 08:55:45.221 STDOUT terraform:       + region = (known after apply)
2025-09-20 08:55:45.222059 | orchestrator | 08:55:45.222 STDOUT terraform:       + size = 80
2025-09-20 08:55:45.222092 | orchestrator | 08:55:45.222 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-20 08:55:45.222120 | orchestrator | 08:55:45.222 STDOUT terraform:       + volume_type = "ssd"
2025-09-20 08:55:45.222126 | orchestrator | 08:55:45.222 STDOUT terraform:     }
2025-09-20 08:55:45.222175 | orchestrator | 08:55:45.222 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-20 08:55:45.222220 | orchestrator | 08:55:45.222 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-20 08:55:45.222254 | orchestrator | 08:55:45.222 STDOUT terraform:       + attachment = (known after apply)
2025-09-20 08:55:45.222277 | orchestrator | 08:55:45.222 STDOUT terraform:       + availability_zone = "nova"
2025-09-20 08:55:45.222311 | orchestrator | 08:55:45.222 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.222349 | orchestrator | 08:55:45.222 STDOUT terraform:       + image_id = (known after apply)
2025-09-20 08:55:45.222383 | orchestrator | 08:55:45.222 STDOUT terraform:       + metadata = (known after apply)
2025-09-20 08:55:45.222435 | orchestrator | 08:55:45.222 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-09-20 08:55:45.222471 | orchestrator | 08:55:45.222 STDOUT terraform:       + region = (known after apply)
2025-09-20 08:55:45.222491 | orchestrator | 08:55:45.222 STDOUT terraform:       + size = 80
2025-09-20 08:55:45.222513 | orchestrator | 08:55:45.222 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-20 08:55:45.222538 | orchestrator | 08:55:45.222 STDOUT terraform:       + volume_type = "ssd"
2025-09-20 08:55:45.222544 | orchestrator | 08:55:45.222 STDOUT terraform:     }
2025-09-20 08:55:45.222591 | orchestrator | 08:55:45.222 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-20 08:55:45.222636 | orchestrator | 08:55:45.222 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-20 08:55:45.222671 | orchestrator | 08:55:45.222 STDOUT terraform:       + attachment = (known after apply)
2025-09-20 08:55:45.222696 | orchestrator | 08:55:45.222 STDOUT terraform:       + availability_zone = "nova"
2025-09-20 08:55:45.222734 | orchestrator | 08:55:45.222 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.222767 | orchestrator | 08:55:45.222 STDOUT terraform:       + image_id = (known after apply)
2025-09-20 08:55:45.222802 | orchestrator | 08:55:45.222 STDOUT terraform:       + metadata = (known after apply)
2025-09-20 08:55:45.222846 | orchestrator | 08:55:45.222 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-09-20 08:55:45.222882 | orchestrator | 08:55:45.222 STDOUT terraform:       + region = (known after apply)
2025-09-20 08:55:45.222903 | orchestrator | 08:55:45.222 STDOUT terraform:       + size = 80
2025-09-20 08:55:45.222927 | orchestrator | 08:55:45.222 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-20 08:55:45.222951 | orchestrator | 08:55:45.222 STDOUT terraform:       + volume_type = "ssd"
2025-09-20 08:55:45.222966 | orchestrator | 08:55:45.222 STDOUT terraform:     }
2025-09-20 08:55:45.223012 | orchestrator | 08:55:45.222 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-20 08:55:45.223058 | orchestrator | 08:55:45.223 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-20 08:55:45.223092 | orchestrator | 08:55:45.223 STDOUT terraform:       + attachment = (known after apply)
2025-09-20 08:55:45.223115 | orchestrator | 08:55:45.223 STDOUT terraform:       + availability_zone = "nova"
2025-09-20 08:55:45.223150 | orchestrator | 08:55:45.223 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.223185 | orchestrator | 08:55:45.223 STDOUT terraform:       + image_id = (known after apply)
2025-09-20 08:55:45.223220 | orchestrator | 08:55:45.223 STDOUT terraform:       + metadata = (known after apply)
2025-09-20 08:55:45.223262 | orchestrator | 08:55:45.223 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-09-20 08:55:45.223299 | orchestrator | 08:55:45.223 STDOUT terraform:       + region = (known after apply)
2025-09-20 08:55:45.223319 | orchestrator | 08:55:45.223 STDOUT terraform:       + size = 80
2025-09-20 08:55:45.223343 | orchestrator | 08:55:45.223 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-20 08:55:45.223369 | orchestrator | 08:55:45.223 STDOUT terraform:       + volume_type = "ssd"
2025-09-20 08:55:45.223383 | orchestrator | 08:55:45.223 STDOUT terraform:     }
2025-09-20 08:55:45.223439 | orchestrator | 08:55:45.223 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-20 08:55:45.223484 | orchestrator | 08:55:45.223 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-20 08:55:45.223520 | orchestrator | 08:55:45.223 STDOUT terraform:       + attachment = (known after apply)
2025-09-20 08:55:45.223545 | orchestrator | 08:55:45.223 STDOUT terraform:       + availability_zone = "nova"
2025-09-20 08:55:45.223584 | orchestrator | 08:55:45.223 STDOUT terraform:       + id = (known after apply)
2025-09-20 08:55:45.223615 | orchestrator | 08:55:45.223 STDOUT terraform:       + image_id = (known after apply)
2025-09-20 08:55:45.223650 | orchestrator | 08:55:45.223 STDOUT terraform:       + metadata = (known after apply)
2025-09-20 08:55:45.223694 | orchestrator | 08:55:45.223 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-09-20 08:55:45.223729 | orchestrator | 08:55:45.223 STDOUT terraform:       + region = (known after apply)
2025-09-20 08:55:45.223750 | orchestrator | 08:55:45.223 STDOUT terraform:       + size = 80
2025-09-20 08:55:45.223774 | orchestrator | 08:55:45.223 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-20 08:55:45.223797 | orchestrator | 08:55:45.223 STDOUT terraform:       + volume_type = "ssd"
2025-09-20 08:55:45.223812 | orchestrator | 08:55:45.223 STDOUT terraform:     }
2025-09-20 08:55:45.223857 | orchestrator | 08:55:45.223 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-20 08:55:45.223902 | orchestrator | 08:55:45.223 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-20 08:55:45.223939 | orchestrator | 08:55:45.223 STDOUT
terraform:  + attachment = (known after apply) 2025-09-20 08:55:45.223963 | orchestrator | 08:55:45.223 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.223998 | orchestrator | 08:55:45.223 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.224033 | orchestrator | 08:55:45.223 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 08:55:45.224072 | orchestrator | 08:55:45.224 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-20 08:55:45.224109 | orchestrator | 08:55:45.224 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.224129 | orchestrator | 08:55:45.224 STDOUT terraform:  + size = 20 2025-09-20 08:55:45.224153 | orchestrator | 08:55:45.224 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 08:55:45.224177 | orchestrator | 08:55:45.224 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 08:55:45.224192 | orchestrator | 08:55:45.224 STDOUT terraform:  } 2025-09-20 08:55:45.224235 | orchestrator | 08:55:45.224 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-20 08:55:45.224276 | orchestrator | 08:55:45.224 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 08:55:45.224314 | orchestrator | 08:55:45.224 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 08:55:45.224339 | orchestrator | 08:55:45.224 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.224373 | orchestrator | 08:55:45.224 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.224407 | orchestrator | 08:55:45.224 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 08:55:45.224455 | orchestrator | 08:55:45.224 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-20 08:55:45.224486 | orchestrator | 08:55:45.224 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.224509 | orchestrator | 08:55:45.224 STDOUT terraform:  + size = 20 2025-09-20 08:55:45.224537 | 
orchestrator | 08:55:45.224 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 08:55:45.224555 | orchestrator | 08:55:45.224 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 08:55:45.224562 | orchestrator | 08:55:45.224 STDOUT terraform:  } 2025-09-20 08:55:45.224607 | orchestrator | 08:55:45.224 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-20 08:55:45.224650 | orchestrator | 08:55:45.224 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 08:55:45.224685 | orchestrator | 08:55:45.224 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 08:55:45.224708 | orchestrator | 08:55:45.224 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.224745 | orchestrator | 08:55:45.224 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.224786 | orchestrator | 08:55:45.224 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 08:55:45.224825 | orchestrator | 08:55:45.224 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-20 08:55:45.224860 | orchestrator | 08:55:45.224 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.224881 | orchestrator | 08:55:45.224 STDOUT terraform:  + size = 20 2025-09-20 08:55:45.224905 | orchestrator | 08:55:45.224 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 08:55:45.224930 | orchestrator | 08:55:45.224 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 08:55:45.224945 | orchestrator | 08:55:45.224 STDOUT terraform:  } 2025-09-20 08:55:45.224989 | orchestrator | 08:55:45.224 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-20 08:55:45.225031 | orchestrator | 08:55:45.224 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 08:55:45.225065 | orchestrator | 08:55:45.225 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 08:55:45.225088 | orchestrator | 
08:55:45.225 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.225124 | orchestrator | 08:55:45.225 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.225174 | orchestrator | 08:55:45.225 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 08:55:45.225212 | orchestrator | 08:55:45.225 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-20 08:55:45.225247 | orchestrator | 08:55:45.225 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.225281 | orchestrator | 08:55:45.225 STDOUT terraform:  + size = 20 2025-09-20 08:55:45.225287 | orchestrator | 08:55:45.225 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 08:55:45.225322 | orchestrator | 08:55:45.225 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 08:55:45.225327 | orchestrator | 08:55:45.225 STDOUT terraform:  } 2025-09-20 08:55:45.227016 | orchestrator | 08:55:45.225 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-20 08:55:45.227101 | orchestrator | 08:55:45.225 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 08:55:45.227116 | orchestrator | 08:55:45.225 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 08:55:45.227127 | orchestrator | 08:55:45.225 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.227137 | orchestrator | 08:55:45.225 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.227146 | orchestrator | 08:55:45.225 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 08:55:45.227169 | orchestrator | 08:55:45.225 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-20 08:55:45.227180 | orchestrator | 08:55:45.225 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.227213 | orchestrator | 08:55:45.225 STDOUT terraform:  + size = 20 2025-09-20 08:55:45.227224 | orchestrator | 08:55:45.225 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 
08:55:45.227234 | orchestrator | 08:55:45.225 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 08:55:45.227244 | orchestrator | 08:55:45.225 STDOUT terraform:  } 2025-09-20 08:55:45.227254 | orchestrator | 08:55:45.225 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-20 08:55:45.227264 | orchestrator | 08:55:45.225 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 08:55:45.227273 | orchestrator | 08:55:45.225 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 08:55:45.227283 | orchestrator | 08:55:45.225 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.227292 | orchestrator | 08:55:45.225 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.227302 | orchestrator | 08:55:45.225 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 08:55:45.227312 | orchestrator | 08:55:45.225 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-20 08:55:45.227321 | orchestrator | 08:55:45.225 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.227331 | orchestrator | 08:55:45.225 STDOUT terraform:  + size = 20 2025-09-20 08:55:45.227340 | orchestrator | 08:55:45.225 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 08:55:45.227350 | orchestrator | 08:55:45.225 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 08:55:45.227360 | orchestrator | 08:55:45.225 STDOUT terraform:  } 2025-09-20 08:55:45.227369 | orchestrator | 08:55:45.225 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-20 08:55:45.227379 | orchestrator | 08:55:45.226 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 08:55:45.227388 | orchestrator | 08:55:45.226 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 08:55:45.227398 | orchestrator | 08:55:45.226 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.227407 | 
orchestrator | 08:55:45.226 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.227461 | orchestrator | 08:55:45.226 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 08:55:45.227472 | orchestrator | 08:55:45.226 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-20 08:55:45.227482 | orchestrator | 08:55:45.226 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.227492 | orchestrator | 08:55:45.226 STDOUT terraform:  + size = 20 2025-09-20 08:55:45.227501 | orchestrator | 08:55:45.226 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 08:55:45.227511 | orchestrator | 08:55:45.226 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 08:55:45.227520 | orchestrator | 08:55:45.226 STDOUT terraform:  } 2025-09-20 08:55:45.227530 | orchestrator | 08:55:45.226 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-20 08:55:45.227553 | orchestrator | 08:55:45.226 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 08:55:45.227571 | orchestrator | 08:55:45.226 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 08:55:45.227581 | orchestrator | 08:55:45.226 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.227590 | orchestrator | 08:55:45.226 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.227599 | orchestrator | 08:55:45.226 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 08:55:45.227609 | orchestrator | 08:55:45.226 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-20 08:55:45.227619 | orchestrator | 08:55:45.226 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.227634 | orchestrator | 08:55:45.226 STDOUT terraform:  + size = 20 2025-09-20 08:55:45.227644 | orchestrator | 08:55:45.226 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 08:55:45.227653 | orchestrator | 08:55:45.226 STDOUT terraform:  + volume_type = "ssd" 
2025-09-20 08:55:45.227663 | orchestrator | 08:55:45.226 STDOUT terraform:  } 2025-09-20 08:55:45.227673 | orchestrator | 08:55:45.226 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-20 08:55:45.227682 | orchestrator | 08:55:45.226 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-20 08:55:45.227692 | orchestrator | 08:55:45.226 STDOUT terraform:  + attachment = (known after apply) 2025-09-20 08:55:45.227701 | orchestrator | 08:55:45.226 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.227711 | orchestrator | 08:55:45.226 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.227720 | orchestrator | 08:55:45.226 STDOUT terraform:  + metadata = (known after apply) 2025-09-20 08:55:45.227730 | orchestrator | 08:55:45.226 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-20 08:55:45.227739 | orchestrator | 08:55:45.226 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.227749 | orchestrator | 08:55:45.226 STDOUT terraform:  + size = 20 2025-09-20 08:55:45.227758 | orchestrator | 08:55:45.226 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-20 08:55:45.227768 | orchestrator | 08:55:45.226 STDOUT terraform:  + volume_type = "ssd" 2025-09-20 08:55:45.227777 | orchestrator | 08:55:45.226 STDOUT terraform:  } 2025-09-20 08:55:45.227787 | orchestrator | 08:55:45.226 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-20 08:55:45.227796 | orchestrator | 08:55:45.227 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-20 08:55:45.227806 | orchestrator | 08:55:45.227 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 08:55:45.227815 | orchestrator | 08:55:45.227 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 08:55:45.227825 | orchestrator | 08:55:45.227 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-20 08:55:45.227835 | orchestrator | 08:55:45.227 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.227844 | orchestrator | 08:55:45.227 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.227860 | orchestrator | 08:55:45.227 STDOUT terraform:  + config_drive = true 2025-09-20 08:55:45.227870 | orchestrator | 08:55:45.227 STDOUT terraform:  + created = (known after apply) 2025-09-20 08:55:45.227879 | orchestrator | 08:55:45.227 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 08:55:45.227889 | orchestrator | 08:55:45.227 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-20 08:55:45.227899 | orchestrator | 08:55:45.227 STDOUT terraform:  + force_delete = false 2025-09-20 08:55:45.227908 | orchestrator | 08:55:45.227 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 08:55:45.227918 | orchestrator | 08:55:45.227 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.227935 | orchestrator | 08:55:45.227 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 08:55:45.227945 | orchestrator | 08:55:45.227 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 08:55:45.227955 | orchestrator | 08:55:45.227 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 08:55:45.227964 | orchestrator | 08:55:45.227 STDOUT terraform:  + name = "testbed-manager" 2025-09-20 08:55:45.227974 | orchestrator | 08:55:45.227 STDOUT terraform:  + power_state = "active" 2025-09-20 08:55:45.227983 | orchestrator | 08:55:45.227 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.227993 | orchestrator | 08:55:45.227 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 08:55:45.228002 | orchestrator | 08:55:45.227 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 08:55:45.228012 | orchestrator | 08:55:45.227 STDOUT terraform:  + updated = (known after apply) 2025-09-20 08:55:45.228021 | orchestrator | 08:55:45.227 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-20 08:55:45.228031 | orchestrator | 08:55:45.227 STDOUT terraform:  + block_device { 2025-09-20 08:55:45.228041 | orchestrator | 08:55:45.227 STDOUT terraform:  + boot_index = 0 2025-09-20 08:55:45.228050 | orchestrator | 08:55:45.227 STDOUT terraform:  + delete_on_termination = false 2025-09-20 08:55:45.228066 | orchestrator | 08:55:45.227 STDOUT terraform:  + destination_type = "volume" 2025-09-20 08:55:45.228076 | orchestrator | 08:55:45.227 STDOUT terraform:  + multiattach = false 2025-09-20 08:55:45.228086 | orchestrator | 08:55:45.227 STDOUT terraform:  + source_type = "volume" 2025-09-20 08:55:45.228095 | orchestrator | 08:55:45.227 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.228105 | orchestrator | 08:55:45.227 STDOUT terraform:  } 2025-09-20 08:55:45.228115 | orchestrator | 08:55:45.227 STDOUT terraform:  + network { 2025-09-20 08:55:45.228124 | orchestrator | 08:55:45.227 STDOUT terraform:  + access_network = false 2025-09-20 08:55:45.228134 | orchestrator | 08:55:45.227 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 08:55:45.228148 | orchestrator | 08:55:45.227 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 08:55:45.228158 | orchestrator | 08:55:45.227 STDOUT terraform:  + mac = (known after apply) 2025-09-20 08:55:45.228173 | orchestrator | 08:55:45.227 STDOUT terraform:  + name = (known after apply) 2025-09-20 08:55:45.228187 | orchestrator | 08:55:45.227 STDOUT terraform:  + port = (known after apply) 2025-09-20 08:55:45.228197 | orchestrator | 08:55:45.227 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.228207 | orchestrator | 08:55:45.227 STDOUT terraform:  } 2025-09-20 08:55:45.228217 | orchestrator | 08:55:45.227 STDOUT terraform:  } 2025-09-20 08:55:45.228227 | orchestrator | 08:55:45.228 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-20 08:55:45.228236 | orchestrator | 08:55:45.228 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 08:55:45.228246 | orchestrator | 08:55:45.228 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 08:55:45.228256 | orchestrator | 08:55:45.228 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 08:55:45.228266 | orchestrator | 08:55:45.228 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 08:55:45.228278 | orchestrator | 08:55:45.228 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.228288 | orchestrator | 08:55:45.228 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.228298 | orchestrator | 08:55:45.228 STDOUT terraform:  + config_drive = true 2025-09-20 08:55:45.228308 | orchestrator | 08:55:45.228 STDOUT terraform:  + created = (known after apply) 2025-09-20 08:55:45.228321 | orchestrator | 08:55:45.228 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 08:55:45.228333 | orchestrator | 08:55:45.228 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 08:55:45.228346 | orchestrator | 08:55:45.228 STDOUT terraform:  + force_delete = false 2025-09-20 08:55:45.228582 | orchestrator | 08:55:45.228 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 08:55:45.228598 | orchestrator | 08:55:45.228 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.228608 | orchestrator | 08:55:45.228 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 08:55:45.228617 | orchestrator | 08:55:45.228 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 08:55:45.228627 | orchestrator | 08:55:45.228 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 08:55:45.228637 | orchestrator | 08:55:45.228 STDOUT terraform:  + name = "testbed-node-0" 2025-09-20 08:55:45.228646 | orchestrator | 08:55:45.228 STDOUT terraform:  + power_state = "active" 2025-09-20 08:55:45.228660 | orchestrator | 08:55:45.228 STDOUT terraform:  + region = (known after 
apply) 2025-09-20 08:55:45.228669 | orchestrator | 08:55:45.228 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 08:55:45.228679 | orchestrator | 08:55:45.228 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 08:55:45.228689 | orchestrator | 08:55:45.228 STDOUT terraform:  + updated = (known after apply) 2025-09-20 08:55:45.228702 | orchestrator | 08:55:45.228 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 08:55:45.228715 | orchestrator | 08:55:45.228 STDOUT terraform:  + block_device { 2025-09-20 08:55:45.228735 | orchestrator | 08:55:45.228 STDOUT terraform:  + boot_index = 0 2025-09-20 08:55:45.228748 | orchestrator | 08:55:45.228 STDOUT terraform:  + delete_on_termination = false 2025-09-20 08:55:45.228858 | orchestrator | 08:55:45.228 STDOUT terraform:  + destination_type = "volume" 2025-09-20 08:55:45.228872 | orchestrator | 08:55:45.228 STDOUT terraform:  + multiattach = false 2025-09-20 08:55:45.228881 | orchestrator | 08:55:45.228 STDOUT terraform:  + source_type = "volume" 2025-09-20 08:55:45.228895 | orchestrator | 08:55:45.228 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.228905 | orchestrator | 08:55:45.228 STDOUT terraform:  } 2025-09-20 08:55:45.228914 | orchestrator | 08:55:45.228 STDOUT terraform:  + network { 2025-09-20 08:55:45.228924 | orchestrator | 08:55:45.228 STDOUT terraform:  + access_network = false 2025-09-20 08:55:45.228943 | orchestrator | 08:55:45.228 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 08:55:45.228956 | orchestrator | 08:55:45.228 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 08:55:45.228969 | orchestrator | 08:55:45.228 STDOUT terraform:  + mac = (known after apply) 2025-09-20 08:55:45.229173 | orchestrator | 08:55:45.228 STDOUT terraform:  + name = (known after apply) 2025-09-20 08:55:45.229187 | orchestrator | 08:55:45.228 STDOUT terraform:  + port = (known after apply) 2025-09-20 
08:55:45.229196 | orchestrator | 08:55:45.229 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.229206 | orchestrator | 08:55:45.229 STDOUT terraform:  } 2025-09-20 08:55:45.229216 | orchestrator | 08:55:45.229 STDOUT terraform:  } 2025-09-20 08:55:45.229225 | orchestrator | 08:55:45.229 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-20 08:55:45.229235 | orchestrator | 08:55:45.229 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 08:55:45.229249 | orchestrator | 08:55:45.229 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 08:55:45.229259 | orchestrator | 08:55:45.229 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 08:55:45.229269 | orchestrator | 08:55:45.229 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 08:55:45.229281 | orchestrator | 08:55:45.229 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.229291 | orchestrator | 08:55:45.229 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.229304 | orchestrator | 08:55:45.229 STDOUT terraform:  + config_drive = true 2025-09-20 08:55:45.229660 | orchestrator | 08:55:45.229 STDOUT terraform:  + created = (known after apply) 2025-09-20 08:55:45.229677 | orchestrator | 08:55:45.229 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 08:55:45.229687 | orchestrator | 08:55:45.229 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 08:55:45.229697 | orchestrator | 08:55:45.229 STDOUT terraform:  + force_delete = false 2025-09-20 08:55:45.229706 | orchestrator | 08:55:45.229 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 08:55:45.229724 | orchestrator | 08:55:45.229 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.229734 | orchestrator | 08:55:45.229 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 08:55:45.229743 | orchestrator | 08:55:45.229 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-20 08:55:45.229753 | orchestrator | 08:55:45.229 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 08:55:45.229762 | orchestrator | 08:55:45.229 STDOUT terraform:  + name = "testbed-node-1" 2025-09-20 08:55:45.229772 | orchestrator | 08:55:45.229 STDOUT terraform:  + power_state = "active" 2025-09-20 08:55:45.229781 | orchestrator | 08:55:45.229 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.229795 | orchestrator | 08:55:45.229 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 08:55:45.229805 | orchestrator | 08:55:45.229 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 08:55:45.229817 | orchestrator | 08:55:45.229 STDOUT terraform:  + updated = (known after apply) 2025-09-20 08:55:45.229893 | orchestrator | 08:55:45.229 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 08:55:45.229906 | orchestrator | 08:55:45.229 STDOUT terraform:  + block_device { 2025-09-20 08:55:45.229919 | orchestrator | 08:55:45.229 STDOUT terraform:  + boot_index = 0 2025-09-20 08:55:45.229932 | orchestrator | 08:55:45.229 STDOUT terraform:  + delete_on_termination = false 2025-09-20 08:55:45.230163 | orchestrator | 08:55:45.229 STDOUT terraform:  + destination_type = "volume" 2025-09-20 08:55:45.230180 | orchestrator | 08:55:45.229 STDOUT terraform:  + multiattach = false 2025-09-20 08:55:45.230189 | orchestrator | 08:55:45.229 STDOUT terraform:  + source_type = "volume" 2025-09-20 08:55:45.230199 | orchestrator | 08:55:45.230 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.230209 | orchestrator | 08:55:45.230 STDOUT terraform:  } 2025-09-20 08:55:45.230218 | orchestrator | 08:55:45.230 STDOUT terraform:  + network { 2025-09-20 08:55:45.230228 | orchestrator | 08:55:45.230 STDOUT terraform:  + access_network = false 2025-09-20 08:55:45.230237 | orchestrator | 08:55:45.230 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-20 08:55:45.230247 | orchestrator | 08:55:45.230 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 08:55:45.230261 | orchestrator | 08:55:45.230 STDOUT terraform:  + mac = (known after apply) 2025-09-20 08:55:45.230270 | orchestrator | 08:55:45.230 STDOUT terraform:  + name = (known after apply) 2025-09-20 08:55:45.230280 | orchestrator | 08:55:45.230 STDOUT terraform:  + port = (known after apply) 2025-09-20 08:55:45.230290 | orchestrator | 08:55:45.230 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.230299 | orchestrator | 08:55:45.230 STDOUT terraform:  } 2025-09-20 08:55:45.230312 | orchestrator | 08:55:45.230 STDOUT terraform:  } 2025-09-20 08:55:45.230322 | orchestrator | 08:55:45.230 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-20 08:55:45.230410 | orchestrator | 08:55:45.230 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 08:55:45.230449 | orchestrator | 08:55:45.230 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 08:55:45.230459 | orchestrator | 08:55:45.230 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 08:55:45.230472 | orchestrator | 08:55:45.230 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 08:55:45.230491 | orchestrator | 08:55:45.230 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.230504 | orchestrator | 08:55:45.230 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.230517 | orchestrator | 08:55:45.230 STDOUT terraform:  + config_drive = true 2025-09-20 08:55:45.230614 | orchestrator | 08:55:45.230 STDOUT terraform:  + created = (known after apply) 2025-09-20 08:55:45.230627 | orchestrator | 08:55:45.230 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 08:55:45.230637 | orchestrator | 08:55:45.230 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 08:55:45.230650 | orchestrator | 08:55:45.230 
STDOUT terraform:  + force_delete = false 2025-09-20 08:55:45.230660 | orchestrator | 08:55:45.230 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 08:55:45.230715 | orchestrator | 08:55:45.230 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.230729 | orchestrator | 08:55:45.230 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 08:55:45.230825 | orchestrator | 08:55:45.230 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 08:55:45.230838 | orchestrator | 08:55:45.230 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 08:55:45.230848 | orchestrator | 08:55:45.230 STDOUT terraform:  + name = "testbed-node-2" 2025-09-20 08:55:45.230857 | orchestrator | 08:55:45.230 STDOUT terraform:  + power_state = "active" 2025-09-20 08:55:45.230870 | orchestrator | 08:55:45.230 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.230882 | orchestrator | 08:55:45.230 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 08:55:45.230895 | orchestrator | 08:55:45.230 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 08:55:45.231021 | orchestrator | 08:55:45.230 STDOUT terraform:  + updated = (known after apply) 2025-09-20 08:55:45.231035 | orchestrator | 08:55:45.230 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 08:55:45.231045 | orchestrator | 08:55:45.230 STDOUT terraform:  + block_device { 2025-09-20 08:55:45.231055 | orchestrator | 08:55:45.230 STDOUT terraform:  + boot_index = 0 2025-09-20 08:55:45.231068 | orchestrator | 08:55:45.230 STDOUT terraform:  + delete_on_termination = false 2025-09-20 08:55:45.231078 | orchestrator | 08:55:45.231 STDOUT terraform:  + destination_type = "volume" 2025-09-20 08:55:45.231090 | orchestrator | 08:55:45.231 STDOUT terraform:  + multiattach = false 2025-09-20 08:55:45.231103 | orchestrator | 08:55:45.231 STDOUT terraform:  + source_type = "volume" 2025-09-20 08:55:45.231237 | orchestrator | 
08:55:45.231 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.231257 | orchestrator | 08:55:45.231 STDOUT terraform:  } 2025-09-20 08:55:45.231266 | orchestrator | 08:55:45.231 STDOUT terraform:  + network { 2025-09-20 08:55:45.231276 | orchestrator | 08:55:45.231 STDOUT terraform:  + access_network = false 2025-09-20 08:55:45.231286 | orchestrator | 08:55:45.231 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 08:55:45.231295 | orchestrator | 08:55:45.231 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 08:55:45.231309 | orchestrator | 08:55:45.231 STDOUT terraform:  + mac = (known after apply) 2025-09-20 08:55:45.231318 | orchestrator | 08:55:45.231 STDOUT terraform:  + name = (known after apply) 2025-09-20 08:55:45.231328 | orchestrator | 08:55:45.231 STDOUT terraform:  + port = (known after apply) 2025-09-20 08:55:45.231341 | orchestrator | 08:55:45.231 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.231350 | orchestrator | 08:55:45.231 STDOUT terraform:  } 2025-09-20 08:55:45.231367 | orchestrator | 08:55:45.231 STDOUT terraform:  } 2025-09-20 08:55:45.231441 | orchestrator | 08:55:45.231 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-09-20 08:55:45.231454 | orchestrator | 08:55:45.231 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 08:55:45.231467 | orchestrator | 08:55:45.231 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 08:55:45.231633 | orchestrator | 08:55:45.231 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 08:55:45.231645 | orchestrator | 08:55:45.231 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 08:55:45.231655 | orchestrator | 08:55:45.231 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.231665 | orchestrator | 08:55:45.231 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.231675 | orchestrator | 
08:55:45.231 STDOUT terraform:  + config_drive = true 2025-09-20 08:55:45.231685 | orchestrator | 08:55:45.231 STDOUT terraform:  + created = (known after apply) 2025-09-20 08:55:45.231698 | orchestrator | 08:55:45.231 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 08:55:45.231708 | orchestrator | 08:55:45.231 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 08:55:45.231720 | orchestrator | 08:55:45.231 STDOUT terraform:  + force_delete = false 2025-09-20 08:55:45.231733 | orchestrator | 08:55:45.231 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 08:55:45.232050 | orchestrator | 08:55:45.231 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.232063 | orchestrator | 08:55:45.231 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 08:55:45.232073 | orchestrator | 08:55:45.231 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 08:55:45.232083 | orchestrator | 08:55:45.231 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 08:55:45.232092 | orchestrator | 08:55:45.231 STDOUT terraform:  + name = "testbed-node-3" 2025-09-20 08:55:45.232102 | orchestrator | 08:55:45.231 STDOUT terraform:  + power_state = "active" 2025-09-20 08:55:45.232119 | orchestrator | 08:55:45.231 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.232129 | orchestrator | 08:55:45.231 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 08:55:45.232139 | orchestrator | 08:55:45.231 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 08:55:45.232149 | orchestrator | 08:55:45.231 STDOUT terraform:  + updated = (known after apply) 2025-09-20 08:55:45.232159 | orchestrator | 08:55:45.231 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 08:55:45.232182 | orchestrator | 08:55:45.232 STDOUT terraform:  + block_device { 2025-09-20 08:55:45.232192 | orchestrator | 08:55:45.232 STDOUT terraform:  + boot_index = 0 2025-09-20 
08:55:45.232202 | orchestrator | 08:55:45.232 STDOUT terraform:  + delete_on_termination = false 2025-09-20 08:55:45.232212 | orchestrator | 08:55:45.232 STDOUT terraform:  + destination_type = "volume" 2025-09-20 08:55:45.232221 | orchestrator | 08:55:45.232 STDOUT terraform:  + multiattach = false 2025-09-20 08:55:45.232235 | orchestrator | 08:55:45.232 STDOUT terraform:  + source_type = "volume" 2025-09-20 08:55:45.232244 | orchestrator | 08:55:45.232 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.232254 | orchestrator | 08:55:45.232 STDOUT terraform:  } 2025-09-20 08:55:45.232264 | orchestrator | 08:55:45.232 STDOUT terraform:  + network { 2025-09-20 08:55:45.232276 | orchestrator | 08:55:45.232 STDOUT terraform:  + access_network = false 2025-09-20 08:55:45.232286 | orchestrator | 08:55:45.232 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 08:55:45.232299 | orchestrator | 08:55:45.232 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 08:55:45.232628 | orchestrator | 08:55:45.232 STDOUT terraform:  + mac = (known after apply) 2025-09-20 08:55:45.233608 | orchestrator | 08:55:45.232 STDOUT terraform:  + name = (known after apply) 2025-09-20 08:55:45.233618 | orchestrator | 08:55:45.232 STDOUT terraform:  + port = (known after apply) 2025-09-20 08:55:45.233626 | orchestrator | 08:55:45.232 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.233634 | orchestrator | 08:55:45.232 STDOUT terraform:  } 2025-09-20 08:55:45.233642 | orchestrator | 08:55:45.232 STDOUT terraform:  } 2025-09-20 08:55:45.233650 | orchestrator | 08:55:45.232 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-09-20 08:55:45.233659 | orchestrator | 08:55:45.232 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 08:55:45.233667 | orchestrator | 08:55:45.232 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 08:55:45.233675 | 
orchestrator | 08:55:45.232 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 08:55:45.233687 | orchestrator | 08:55:45.232 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 08:55:45.233695 | orchestrator | 08:55:45.232 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.233703 | orchestrator | 08:55:45.232 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.233718 | orchestrator | 08:55:45.232 STDOUT terraform:  + config_drive = true 2025-09-20 08:55:45.233726 | orchestrator | 08:55:45.232 STDOUT terraform:  + created = (known after apply) 2025-09-20 08:55:45.233733 | orchestrator | 08:55:45.232 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 08:55:45.233741 | orchestrator | 08:55:45.232 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 08:55:45.233749 | orchestrator | 08:55:45.232 STDOUT terraform:  + force_delete = false 2025-09-20 08:55:45.233757 | orchestrator | 08:55:45.232 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 08:55:45.233765 | orchestrator | 08:55:45.232 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.233773 | orchestrator | 08:55:45.232 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 08:55:45.233781 | orchestrator | 08:55:45.232 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 08:55:45.233789 | orchestrator | 08:55:45.232 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 08:55:45.233796 | orchestrator | 08:55:45.232 STDOUT terraform:  + name = "testbed-node-4" 2025-09-20 08:55:45.233804 | orchestrator | 08:55:45.232 STDOUT terraform:  + power_state = "active" 2025-09-20 08:55:45.233812 | orchestrator | 08:55:45.232 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.233820 | orchestrator | 08:55:45.232 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 08:55:45.233828 | orchestrator | 08:55:45.233 STDOUT terraform:  + stop_before_destroy = 
false 2025-09-20 08:55:45.233836 | orchestrator | 08:55:45.233 STDOUT terraform:  + updated = (known after apply) 2025-09-20 08:55:45.233844 | orchestrator | 08:55:45.233 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 08:55:45.233851 | orchestrator | 08:55:45.233 STDOUT terraform:  + block_device { 2025-09-20 08:55:45.233859 | orchestrator | 08:55:45.233 STDOUT terraform:  + boot_index = 0 2025-09-20 08:55:45.233867 | orchestrator | 08:55:45.233 STDOUT terraform:  + delete_on_termination = false 2025-09-20 08:55:45.233875 | orchestrator | 08:55:45.233 STDOUT terraform:  + destination_type = "volume" 2025-09-20 08:55:45.233883 | orchestrator | 08:55:45.233 STDOUT terraform:  + multiattach = false 2025-09-20 08:55:45.233891 | orchestrator | 08:55:45.233 STDOUT terraform:  + source_type = "volume" 2025-09-20 08:55:45.233899 | orchestrator | 08:55:45.233 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.233906 | orchestrator | 08:55:45.233 STDOUT terraform:  } 2025-09-20 08:55:45.233914 | orchestrator | 08:55:45.233 STDOUT terraform:  + network { 2025-09-20 08:55:45.233922 | orchestrator | 08:55:45.233 STDOUT terraform:  + access_network = false 2025-09-20 08:55:45.233930 | orchestrator | 08:55:45.233 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 08:55:45.233938 | orchestrator | 08:55:45.233 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 08:55:45.233946 | orchestrator | 08:55:45.233 STDOUT terraform:  + mac = (known after apply) 2025-09-20 08:55:45.233958 | orchestrator | 08:55:45.233 STDOUT terraform:  + name = (known after apply) 2025-09-20 08:55:45.233966 | orchestrator | 08:55:45.233 STDOUT terraform:  + port = (known after apply) 2025-09-20 08:55:45.233974 | orchestrator | 08:55:45.233 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.233982 | orchestrator | 08:55:45.233 STDOUT terraform:  } 2025-09-20 08:55:45.233990 | orchestrator | 08:55:45.233 
STDOUT terraform:  } 2025-09-20 08:55:45.234008 | orchestrator | 08:55:45.233 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-09-20 08:55:45.234038 | orchestrator | 08:55:45.233 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-20 08:55:45.234046 | orchestrator | 08:55:45.233 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-20 08:55:45.234054 | orchestrator | 08:55:45.233 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-20 08:55:45.234062 | orchestrator | 08:55:45.233 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-20 08:55:45.234070 | orchestrator | 08:55:45.233 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.234078 | orchestrator | 08:55:45.233 STDOUT terraform:  + availability_zone = "nova" 2025-09-20 08:55:45.234086 | orchestrator | 08:55:45.233 STDOUT terraform:  + config_drive = true 2025-09-20 08:55:45.234094 | orchestrator | 08:55:45.233 STDOUT terraform:  + created = (known after apply) 2025-09-20 08:55:45.234102 | orchestrator | 08:55:45.233 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-20 08:55:45.234114 | orchestrator | 08:55:45.233 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-20 08:55:45.234126 | orchestrator | 08:55:45.233 STDOUT terraform:  + force_delete = false 2025-09-20 08:55:45.234134 | orchestrator | 08:55:45.233 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-20 08:55:45.234142 | orchestrator | 08:55:45.233 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.234150 | orchestrator | 08:55:45.233 STDOUT terraform:  + image_id = (known after apply) 2025-09-20 08:55:45.234158 | orchestrator | 08:55:45.233 STDOUT terraform:  + image_name = (known after apply) 2025-09-20 08:55:45.234166 | orchestrator | 08:55:45.233 STDOUT terraform:  + key_pair = "testbed" 2025-09-20 08:55:45.234174 | orchestrator | 08:55:45.233 STDOUT terraform:  + name = 
"testbed-node-5" 2025-09-20 08:55:45.234185 | orchestrator | 08:55:45.233 STDOUT terraform:  + power_state = "active" 2025-09-20 08:55:45.234193 | orchestrator | 08:55:45.234 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.234201 | orchestrator | 08:55:45.234 STDOUT terraform:  + security_groups = (known after apply) 2025-09-20 08:55:45.234209 | orchestrator | 08:55:45.234 STDOUT terraform:  + stop_before_destroy = false 2025-09-20 08:55:45.234217 | orchestrator | 08:55:45.234 STDOUT terraform:  + updated = (known after apply) 2025-09-20 08:55:45.234225 | orchestrator | 08:55:45.234 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-20 08:55:45.234236 | orchestrator | 08:55:45.234 STDOUT terraform:  + block_device { 2025-09-20 08:55:45.234249 | orchestrator | 08:55:45.234 STDOUT terraform:  + boot_index = 0 2025-09-20 08:55:45.234257 | orchestrator | 08:55:45.234 STDOUT terraform:  + delete_on_termination = false 2025-09-20 08:55:45.234267 | orchestrator | 08:55:45.234 STDOUT terraform:  + destination_type = "volume" 2025-09-20 08:55:45.234278 | orchestrator | 08:55:45.234 STDOUT terraform:  + multiattach = false 2025-09-20 08:55:45.234878 | orchestrator | 08:55:45.234 STDOUT terraform:  + source_type = "volume" 2025-09-20 08:55:45.235783 | orchestrator | 08:55:45.234 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.235957 | orchestrator | 08:55:45.234 STDOUT terraform:  } 2025-09-20 08:55:45.236047 | orchestrator | 08:55:45.234 STDOUT terraform:  + network { 2025-09-20 08:55:45.236252 | orchestrator | 08:55:45.234 STDOUT terraform:  + access_network = false 2025-09-20 08:55:45.236459 | orchestrator | 08:55:45.234 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-20 08:55:45.237439 | orchestrator | 08:55:45.234 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-20 08:55:45.237595 | orchestrator | 08:55:45.234 STDOUT terraform:  + mac = (known after apply) 2025-09-20 
08:55:45.238030 | orchestrator | 08:55:45.234 STDOUT terraform:  + name = (known after apply) 2025-09-20 08:55:45.238040 | orchestrator | 08:55:45.234 STDOUT terraform:  + port = (known after apply) 2025-09-20 08:55:45.238047 | orchestrator | 08:55:45.234 STDOUT terraform:  + uuid = (known after apply) 2025-09-20 08:55:45.238054 | orchestrator | 08:55:45.234 STDOUT terraform:  } 2025-09-20 08:55:45.238060 | orchestrator | 08:55:45.234 STDOUT terraform:  } 2025-09-20 08:55:45.238072 | orchestrator | 08:55:45.234 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-09-20 08:55:45.238079 | orchestrator | 08:55:45.234 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-09-20 08:55:45.238085 | orchestrator | 08:55:45.234 STDOUT terraform:  + fingerprint = (known after apply) 2025-09-20 08:55:45.238092 | orchestrator | 08:55:45.234 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238099 | orchestrator | 08:55:45.234 STDOUT terraform:  + name = "testbed" 2025-09-20 08:55:45.238105 | orchestrator | 08:55:45.234 STDOUT terraform:  + private_key = (sensitive value) 2025-09-20 08:55:45.238112 | orchestrator | 08:55:45.234 STDOUT terraform:  + public_key = (known after apply) 2025-09-20 08:55:45.238119 | orchestrator | 08:55:45.234 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238129 | orchestrator | 08:55:45.234 STDOUT terraform:  + user_id = (known after apply) 2025-09-20 08:55:45.238136 | orchestrator | 08:55:45.234 STDOUT terraform:  } 2025-09-20 08:55:45.238143 | orchestrator | 08:55:45.234 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-09-20 08:55:45.238150 | orchestrator | 08:55:45.234 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 08:55:45.238157 | orchestrator | 08:55:45.234 STDOUT terraform:  + device = (known after apply) 2025-09-20 08:55:45.238164 | orchestrator | 
08:55:45.234 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238176 | orchestrator | 08:55:45.234 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 08:55:45.238183 | orchestrator | 08:55:45.234 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238190 | orchestrator | 08:55:45.234 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 08:55:45.238197 | orchestrator | 08:55:45.234 STDOUT terraform:  } 2025-09-20 08:55:45.238203 | orchestrator | 08:55:45.234 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-09-20 08:55:45.238210 | orchestrator | 08:55:45.235 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 08:55:45.238217 | orchestrator | 08:55:45.235 STDOUT terraform:  + device = (known after apply) 2025-09-20 08:55:45.238223 | orchestrator | 08:55:45.235 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238230 | orchestrator | 08:55:45.235 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 08:55:45.238237 | orchestrator | 08:55:45.235 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238243 | orchestrator | 08:55:45.235 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 08:55:45.238250 | orchestrator | 08:55:45.235 STDOUT terraform:  } 2025-09-20 08:55:45.238257 | orchestrator | 08:55:45.235 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-09-20 08:55:45.238263 | orchestrator | 08:55:45.235 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 08:55:45.238270 | orchestrator | 08:55:45.235 STDOUT terraform:  + device = (known after apply) 2025-09-20 08:55:45.238277 | orchestrator | 08:55:45.235 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238283 | orchestrator | 08:55:45.235 STDOUT terraform:  + instance_id = 
(known after apply) 2025-09-20 08:55:45.238290 | orchestrator | 08:55:45.235 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238297 | orchestrator | 08:55:45.235 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 08:55:45.238303 | orchestrator | 08:55:45.235 STDOUT terraform:  } 2025-09-20 08:55:45.238310 | orchestrator | 08:55:45.235 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-09-20 08:55:45.238317 | orchestrator | 08:55:45.235 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 08:55:45.238324 | orchestrator | 08:55:45.235 STDOUT terraform:  + device = (known after apply) 2025-09-20 08:55:45.238335 | orchestrator | 08:55:45.235 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238342 | orchestrator | 08:55:45.235 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 08:55:45.238349 | orchestrator | 08:55:45.235 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238355 | orchestrator | 08:55:45.235 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 08:55:45.238362 | orchestrator | 08:55:45.235 STDOUT terraform:  } 2025-09-20 08:55:45.238369 | orchestrator | 08:55:45.235 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-09-20 08:55:45.238380 | orchestrator | 08:55:45.235 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 08:55:45.238387 | orchestrator | 08:55:45.235 STDOUT terraform:  + device = (known after apply) 2025-09-20 08:55:45.238397 | orchestrator | 08:55:45.235 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238405 | orchestrator | 08:55:45.235 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 08:55:45.238411 | orchestrator | 08:55:45.235 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238429 
| orchestrator | 08:55:45.235 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 08:55:45.238436 | orchestrator | 08:55:45.235 STDOUT terraform:  } 2025-09-20 08:55:45.238443 | orchestrator | 08:55:45.235 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-09-20 08:55:45.238450 | orchestrator | 08:55:45.235 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 08:55:45.238456 | orchestrator | 08:55:45.235 STDOUT terraform:  + device = (known after apply) 2025-09-20 08:55:45.238463 | orchestrator | 08:55:45.235 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238470 | orchestrator | 08:55:45.235 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 08:55:45.238476 | orchestrator | 08:55:45.236 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238483 | orchestrator | 08:55:45.236 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 08:55:45.238490 | orchestrator | 08:55:45.236 STDOUT terraform:  } 2025-09-20 08:55:45.238497 | orchestrator | 08:55:45.236 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-09-20 08:55:45.238504 | orchestrator | 08:55:45.236 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 08:55:45.238510 | orchestrator | 08:55:45.236 STDOUT terraform:  + device = (known after apply) 2025-09-20 08:55:45.238517 | orchestrator | 08:55:45.236 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238524 | orchestrator | 08:55:45.236 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 08:55:45.238531 | orchestrator | 08:55:45.236 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238537 | orchestrator | 08:55:45.236 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 08:55:45.238544 | orchestrator | 08:55:45.236 STDOUT 
terraform:  } 2025-09-20 08:55:45.238551 | orchestrator | 08:55:45.236 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-09-20 08:55:45.238558 | orchestrator | 08:55:45.236 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 08:55:45.238564 | orchestrator | 08:55:45.236 STDOUT terraform:  + device = (known after apply) 2025-09-20 08:55:45.238571 | orchestrator | 08:55:45.236 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238578 | orchestrator | 08:55:45.236 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 08:55:45.238584 | orchestrator | 08:55:45.236 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238596 | orchestrator | 08:55:45.236 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 08:55:45.238603 | orchestrator | 08:55:45.236 STDOUT terraform:  } 2025-09-20 08:55:45.238614 | orchestrator | 08:55:45.236 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-09-20 08:55:45.238621 | orchestrator | 08:55:45.236 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-09-20 08:55:45.238627 | orchestrator | 08:55:45.236 STDOUT terraform:  + device = (known after apply) 2025-09-20 08:55:45.238634 | orchestrator | 08:55:45.236 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238641 | orchestrator | 08:55:45.236 STDOUT terraform:  + instance_id = (known after apply) 2025-09-20 08:55:45.238647 | orchestrator | 08:55:45.236 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238654 | orchestrator | 08:55:45.236 STDOUT terraform:  + volume_id = (known after apply) 2025-09-20 08:55:45.238661 | orchestrator | 08:55:45.236 STDOUT terraform:  } 2025-09-20 08:55:45.238671 | orchestrator | 08:55:45.236 STDOUT terraform:  # 
openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-09-20 08:55:45.238679 | orchestrator | 08:55:45.236 STDOUT terraform:  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-20 08:55:45.238686 | orchestrator | 08:55:45.236 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-20 08:55:45.238692 | orchestrator | 08:55:45.236 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-20 08:55:45.238699 | orchestrator | 08:55:45.236 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238706 | orchestrator | 08:55:45.236 STDOUT terraform:  + port_id = (known after apply) 2025-09-20 08:55:45.238712 | orchestrator | 08:55:45.236 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238719 | orchestrator | 08:55:45.236 STDOUT terraform:  } 2025-09-20 08:55:45.238726 | orchestrator | 08:55:45.237 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-20 08:55:45.238733 | orchestrator | 08:55:45.237 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-20 08:55:45.238739 | orchestrator | 08:55:45.237 STDOUT terraform:  + address = (known after apply) 2025-09-20 08:55:45.238746 | orchestrator | 08:55:45.237 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.238753 | orchestrator | 08:55:45.237 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-20 08:55:45.238760 | orchestrator | 08:55:45.237 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 08:55:45.238766 | orchestrator | 08:55:45.237 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-20 08:55:45.238773 | orchestrator | 08:55:45.237 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238780 | orchestrator | 08:55:45.237 STDOUT terraform:  + pool = "public" 2025-09-20 08:55:45.238787 | orchestrator | 08:55:45.237 STDOUT terraform:  + 
port_id = (known after apply) 2025-09-20 08:55:45.238793 | orchestrator | 08:55:45.237 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238800 | orchestrator | 08:55:45.237 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 08:55:45.238811 | orchestrator | 08:55:45.237 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 08:55:45.238817 | orchestrator | 08:55:45.237 STDOUT terraform:  } 2025-09-20 08:55:45.238824 | orchestrator | 08:55:45.237 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-20 08:55:45.238831 | orchestrator | 08:55:45.237 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-20 08:55:45.238838 | orchestrator | 08:55:45.237 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 08:55:45.238844 | orchestrator | 08:55:45.237 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.238851 | orchestrator | 08:55:45.237 STDOUT terraform:  + availability_zone_hints = [ 2025-09-20 08:55:45.238858 | orchestrator | 08:55:45.237 STDOUT terraform:  + "nova", 2025-09-20 08:55:45.238864 | orchestrator | 08:55:45.237 STDOUT terraform:  ] 2025-09-20 08:55:45.238876 | orchestrator | 08:55:45.237 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-20 08:55:45.238882 | orchestrator | 08:55:45.237 STDOUT terraform:  + external = (known after apply) 2025-09-20 08:55:45.238889 | orchestrator | 08:55:45.237 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.238896 | orchestrator | 08:55:45.237 STDOUT terraform:  + mtu = (known after apply) 2025-09-20 08:55:45.238902 | orchestrator | 08:55:45.237 STDOUT terraform:  + name = "net-testbed-management" 2025-09-20 08:55:45.238909 | orchestrator | 08:55:45.237 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 08:55:45.238916 | orchestrator | 08:55:45.237 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 
08:55:45.238923 | orchestrator | 08:55:45.237 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.238929 | orchestrator | 08:55:45.237 STDOUT terraform:  + shared = (known after apply) 2025-09-20 08:55:45.238936 | orchestrator | 08:55:45.237 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 08:55:45.238943 | orchestrator | 08:55:45.237 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-20 08:55:45.238949 | orchestrator | 08:55:45.237 STDOUT terraform:  + segments (known after apply) 2025-09-20 08:55:45.238956 | orchestrator | 08:55:45.237 STDOUT terraform:  } 2025-09-20 08:55:45.238963 | orchestrator | 08:55:45.237 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-20 08:55:45.238970 | orchestrator | 08:55:45.237 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-20 08:55:45.238976 | orchestrator | 08:55:45.237 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 08:55:45.238983 | orchestrator | 08:55:45.237 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-20 08:55:45.238990 | orchestrator | 08:55:45.238 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-20 08:55:45.238996 | orchestrator | 08:55:45.238 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.239006 | orchestrator | 08:55:45.238 STDOUT terraform:  + device_id = (known after apply) 2025-09-20 08:55:45.239018 | orchestrator | 08:55:45.238 STDOUT terraform:  + device_owner = (known after apply) 2025-09-20 08:55:45.239025 | orchestrator | 08:55:45.238 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-20 08:55:45.239032 | orchestrator | 08:55:45.238 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 08:55:45.239039 | orchestrator | 08:55:45.238 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.239045 | orchestrator | 08:55:45.238 STDOUT terraform:  + 
mac_address = (known after apply) 2025-09-20 08:55:45.239052 | orchestrator | 08:55:45.238 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 08:55:45.239058 | orchestrator | 08:55:45.238 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 08:55:45.239065 | orchestrator | 08:55:45.238 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 08:55:45.239072 | orchestrator | 08:55:45.238 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.239079 | orchestrator | 08:55:45.238 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-20 08:55:45.239085 | orchestrator | 08:55:45.238 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 08:55:45.239092 | orchestrator | 08:55:45.238 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 08:55:45.239099 | orchestrator | 08:55:45.238 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-20 08:55:45.239105 | orchestrator | 08:55:45.238 STDOUT terraform:  } 2025-09-20 08:55:45.239112 | orchestrator | 08:55:45.238 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 08:55:45.239119 | orchestrator | 08:55:45.238 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-20 08:55:45.239126 | orchestrator | 08:55:45.238 STDOUT terraform:  } 2025-09-20 08:55:45.239136 | orchestrator | 08:55:45.238 STDOUT terraform:  + binding (known after apply) 2025-09-20 08:55:45.239143 | orchestrator | 08:55:45.238 STDOUT terraform:  + fixed_ip { 2025-09-20 08:55:45.239149 | orchestrator | 08:55:45.238 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-09-20 08:55:45.239156 | orchestrator | 08:55:45.238 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 08:55:45.239163 | orchestrator | 08:55:45.238 STDOUT terraform:  } 2025-09-20 08:55:45.239170 | orchestrator | 08:55:45.238 STDOUT terraform:  } 2025-09-20 08:55:45.239176 | orchestrator | 08:55:45.238 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will 
be created 2025-09-20 08:55:45.239183 | orchestrator | 08:55:45.238 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-20 08:55:45.239196 | orchestrator | 08:55:45.238 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-20 08:55:45.239203 | orchestrator | 08:55:45.238 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-20 08:55:45.239209 | orchestrator | 08:55:45.238 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-20 08:55:45.239216 | orchestrator | 08:55:45.238 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.239223 | orchestrator | 08:55:45.238 STDOUT terraform:  + device_id = (known after apply) 2025-09-20 08:55:45.239233 | orchestrator | 08:55:45.238 STDOUT terraform:  + device_owner = (known after apply) 2025-09-20 08:55:45.239240 | orchestrator | 08:55:45.238 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-20 08:55:45.239247 | orchestrator | 08:55:45.238 STDOUT terraform:  + dns_name = (known after apply) 2025-09-20 08:55:45.239253 | orchestrator | 08:55:45.238 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.239260 | orchestrator | 08:55:45.239 STDOUT terraform:  + mac_address = (known after apply) 2025-09-20 08:55:45.239267 | orchestrator | 08:55:45.239 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 08:55:45.239274 | orchestrator | 08:55:45.239 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-20 08:55:45.239283 | orchestrator | 08:55:45.239 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-20 08:55:45.239289 | orchestrator | 08:55:45.239 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.239296 | orchestrator | 08:55:45.239 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-20 08:55:45.239303 | orchestrator | 08:55:45.239 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 08:55:45.239310 | 
orchestrator | 08:55:45.239 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 08:55:45.239316 | orchestrator | 08:55:45.239 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-20 08:55:45.239323 | orchestrator | 08:55:45.239 STDOUT terraform:  } 2025-09-20 08:55:45.239332 | orchestrator | 08:55:45.239 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 08:55:45.239339 | orchestrator | 08:55:45.239 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-20 08:55:45.239346 | orchestrator | 08:55:45.239 STDOUT terraform:  } 2025-09-20 08:55:45.239355 | orchestrator | 08:55:45.239 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 08:55:45.240302 | orchestrator | 08:55:45.239 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-20 08:55:45.240344 | orchestrator | 08:55:45.239 STDOUT terraform:  } 2025-09-20 08:55:45.240358 | orchestrator | 08:55:45.239 STDOUT terraform:  + allowed_address_pairs { 2025-09-20 08:55:45.240363 | orchestrator | 08:55:45.239 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-20 08:55:45.240367 | orchestrator | 08:55:45.239 STDOUT terraform:  } 2025-09-20 08:55:45.240371 | orchestrator | 08:55:45.239 STDOUT terraform:  + binding (known after apply) 2025-09-20 08:55:45.240375 | orchestrator | 08:55:45.239 STDOUT terraform:  + fixed_ip { 2025-09-20 08:55:45.240379 | orchestrator | 08:55:45.239 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-09-20 08:55:45.240383 | orchestrator | 08:55:45.239 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-20 08:55:45.240387 | orchestrator | 08:55:45.239 STDOUT terraform:  } 2025-09-20 08:55:45.240391 | orchestrator | 08:55:45.239 STDOUT terraform:  } 2025-09-20 08:55:45.240395 | orchestrator | 08:55:45.239 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-09-20 08:55:45.240400 | orchestrator | 08:55:45.239 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-20 
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
orchestrator | 08:55:45.249 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-20 08:55:45.249737 | orchestrator | 08:55:45.249 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.249769 | orchestrator | 08:55:45.249 STDOUT terraform:  + description = "management security group" 2025-09-20 08:55:45.249807 | orchestrator | 08:55:45.249 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.249836 | orchestrator | 08:55:45.249 STDOUT terraform:  + name = "testbed-management" 2025-09-20 08:55:45.249863 | orchestrator | 08:55:45.249 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.249890 | orchestrator | 08:55:45.249 STDOUT terraform:  + stateful = (known after apply) 2025-09-20 08:55:45.249956 | orchestrator | 08:55:45.249 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 08:55:45.249962 | orchestrator | 08:55:45.249 STDOUT terraform:  } 2025-09-20 08:55:45.250026 | orchestrator | 08:55:45.249 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-20 08:55:45.250106 | orchestrator | 08:55:45.250 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-20 08:55:45.250135 | orchestrator | 08:55:45.250 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.250159 | orchestrator | 08:55:45.250 STDOUT terraform:  + description = "node security group" 2025-09-20 08:55:45.250192 | orchestrator | 08:55:45.250 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.250217 | orchestrator | 08:55:45.250 STDOUT terraform:  + name = "testbed-node" 2025-09-20 08:55:45.250250 | orchestrator | 08:55:45.250 STDOUT terraform:  + region = (known after apply) 2025-09-20 08:55:45.250277 | orchestrator | 08:55:45.250 STDOUT terraform:  + stateful = (known after apply) 2025-09-20 08:55:45.250304 | orchestrator | 08:55:45.250 STDOUT terraform:  + tenant_id = (known 
after apply) 2025-09-20 08:55:45.250327 | orchestrator | 08:55:45.250 STDOUT terraform:  } 2025-09-20 08:55:45.250371 | orchestrator | 08:55:45.250 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-20 08:55:45.250433 | orchestrator | 08:55:45.250 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-20 08:55:45.250465 | orchestrator | 08:55:45.250 STDOUT terraform:  + all_tags = (known after apply) 2025-09-20 08:55:45.250499 | orchestrator | 08:55:45.250 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-20 08:55:45.250516 | orchestrator | 08:55:45.250 STDOUT terraform:  + dns_nameservers = [ 2025-09-20 08:55:45.250522 | orchestrator | 08:55:45.250 STDOUT terraform:  + "8.8.8.8", 2025-09-20 08:55:45.250544 | orchestrator | 08:55:45.250 STDOUT terraform:  + "9.9.9.9", 2025-09-20 08:55:45.250555 | orchestrator | 08:55:45.250 STDOUT terraform:  ] 2025-09-20 08:55:45.250579 | orchestrator | 08:55:45.250 STDOUT terraform:  + enable_dhcp = true 2025-09-20 08:55:45.250618 | orchestrator | 08:55:45.250 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-20 08:55:45.250648 | orchestrator | 08:55:45.250 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.250665 | orchestrator | 08:55:45.250 STDOUT terraform:  + ip_version = 4 2025-09-20 08:55:45.250694 | orchestrator | 08:55:45.250 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-20 08:55:45.250730 | orchestrator | 08:55:45.250 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-20 08:55:45.250766 | orchestrator | 08:55:45.250 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-20 08:55:45.250802 | orchestrator | 08:55:45.250 STDOUT terraform:  + network_id = (known after apply) 2025-09-20 08:55:45.250818 | orchestrator | 08:55:45.250 STDOUT terraform:  + no_gateway = false 2025-09-20 08:55:45.250849 | orchestrator | 08:55:45.250 STDOUT terraform:  + region = (known after 
apply) 2025-09-20 08:55:45.250883 | orchestrator | 08:55:45.250 STDOUT terraform:  + service_types = (known after apply) 2025-09-20 08:55:45.250919 | orchestrator | 08:55:45.250 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-20 08:55:45.250948 | orchestrator | 08:55:45.250 STDOUT terraform:  + allocation_pool { 2025-09-20 08:55:45.250971 | orchestrator | 08:55:45.250 STDOUT terraform:  + end = "192.168.31.250" 2025-09-20 08:55:45.250995 | orchestrator | 08:55:45.250 STDOUT terraform:  + start = "192.168.31.200" 2025-09-20 08:55:45.251001 | orchestrator | 08:55:45.250 STDOUT terraform:  } 2025-09-20 08:55:45.251017 | orchestrator | 08:55:45.251 STDOUT terraform:  } 2025-09-20 08:55:45.251047 | orchestrator | 08:55:45.251 STDOUT terraform:  # terraform_data.image will be created 2025-09-20 08:55:45.251071 | orchestrator | 08:55:45.251 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-20 08:55:45.251095 | orchestrator | 08:55:45.251 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.251120 | orchestrator | 08:55:45.251 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-20 08:55:45.251144 | orchestrator | 08:55:45.251 STDOUT terraform:  + output = (known after apply) 2025-09-20 08:55:45.251173 | orchestrator | 08:55:45.251 STDOUT terraform:  } 2025-09-20 08:55:45.251201 | orchestrator | 08:55:45.251 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-20 08:55:45.251234 | orchestrator | 08:55:45.251 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-20 08:55:45.251257 | orchestrator | 08:55:45.251 STDOUT terraform:  + id = (known after apply) 2025-09-20 08:55:45.251274 | orchestrator | 08:55:45.251 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-20 08:55:45.251296 | orchestrator | 08:55:45.251 STDOUT terraform:  + output = (known after apply) 2025-09-20 08:55:45.251302 | orchestrator | 08:55:45.251 STDOUT terraform:  } 2025-09-20 08:55:45.251333 | orchestrator | 08:55:45.251 STDOUT 
terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-20 08:55:45.251339 | orchestrator | 08:55:45.251 STDOUT terraform: Changes to Outputs: 2025-09-20 08:55:45.251371 | orchestrator | 08:55:45.251 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-20 08:55:45.251395 | orchestrator | 08:55:45.251 STDOUT terraform:  + private_key = (sensitive value) 2025-09-20 08:55:45.466855 | orchestrator | 08:55:45.466 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-20 08:55:45.466926 | orchestrator | 08:55:45.466 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=6e41ac54-b948-68e4-0f69-3176f605af2b] 2025-09-20 08:55:45.466934 | orchestrator | 08:55:45.466 STDOUT terraform: terraform_data.image: Creating... 2025-09-20 08:55:45.466940 | orchestrator | 08:55:45.466 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=c8e91c4a-4de7-d59a-5c01-b5163bc98741] 2025-09-20 08:55:45.482140 | orchestrator | 08:55:45.481 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-20 08:55:45.493499 | orchestrator | 08:55:45.493 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-20 08:55:45.510219 | orchestrator | 08:55:45.509 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-20 08:55:45.511113 | orchestrator | 08:55:45.510 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-20 08:55:45.511623 | orchestrator | 08:55:45.511 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-09-20 08:55:45.512311 | orchestrator | 08:55:45.512 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-20 08:55:45.517857 | orchestrator | 08:55:45.517 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-20 08:55:45.519229 | orchestrator | 08:55:45.519 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2025-09-20 08:55:45.520729 | orchestrator | 08:55:45.520 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-20 08:55:45.525208 | orchestrator | 08:55:45.525 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-20 08:55:46.001717 | orchestrator | 08:55:46.000 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-20 08:55:46.010399 | orchestrator | 08:55:46.010 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-20 08:55:46.014397 | orchestrator | 08:55:46.014 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-20 08:55:46.021122 | orchestrator | 08:55:46.020 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-20 08:55:46.079825 | orchestrator | 08:55:46.079 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-09-20 08:55:46.086604 | orchestrator | 08:55:46.086 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-20 08:55:47.064233 | orchestrator | 08:55:47.063 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 2s [id=6debd38a-ade5-4b86-b67c-706befdd7370]
2025-09-20 08:55:47.078235 | orchestrator | 08:55:47.078 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-20 08:55:49.162697 | orchestrator | 08:55:49.162 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=41170e96-3e47-41ac-ae12-e293d14045c9]
2025-09-20 08:55:49.586828 | orchestrator | 08:55:49.169 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-20 08:55:49.586951 | orchestrator | 08:55:49.187 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=a6b9e5ea-ad72-4152-982a-d01dd494947d]
2025-09-20 08:55:49.586967 | orchestrator | 08:55:49.196 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-20 08:55:49.586976 | orchestrator | 08:55:49.218 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=e93e8b04-9e7b-45a5-9708-eecfe0538f8b]
2025-09-20 08:55:49.586985 | orchestrator | 08:55:49.223 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=fb2cb8e7-ed33-4daf-81ac-3030de87c650]
2025-09-20 08:55:49.586994 | orchestrator | 08:55:49.227 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-20 08:55:49.587003 | orchestrator | 08:55:49.231 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-20 08:55:49.587012 | orchestrator | 08:55:49.245 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=358b31db-4e32-4fff-a843-fcadc4546d57]
2025-09-20 08:55:49.587020 | orchestrator | 08:55:49.252 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-20 08:55:49.587029 | orchestrator | 08:55:49.263 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=91334aab-4987-4e71-91fe-c625707f6cc5]
2025-09-20 08:55:49.587038 | orchestrator | 08:55:49.275 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-20 08:55:49.587046 | orchestrator | 08:55:49.275 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=a4838d5a-524e-41b4-858a-00cf9cd1291a]
2025-09-20 08:55:49.587056 | orchestrator | 08:55:49.286 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-20 08:55:49.587065 | orchestrator | 08:55:49.313 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=e1dd809b-bff8-46fb-aa79-1858a713f2a9]
2025-09-20 08:55:49.587074 | orchestrator | 08:55:49.322 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-20 08:55:49.587083 | orchestrator | 08:55:49.329 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=c2415bc7-a1cc-4fd3-8755-923259240f26]
2025-09-20 08:55:49.587092 | orchestrator | 08:55:49.336 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-20 08:55:50.084296 | orchestrator | 08:55:50.083 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 1s [id=404132cc8c58e7470635a7c58c4af451a78bc9ed]
2025-09-20 08:55:50.084977 | orchestrator | 08:55:50.084 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 1s [id=12ef4fbfb1e931b7c098534c7ae326b9f7347f40]
2025-09-20 08:55:50.262072 | orchestrator | 08:55:50.261 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=ab57f89f-c4f7-48e8-b56d-e151be4ca0f4]
2025-09-20 08:55:50.268086 | orchestrator | 08:55:50.267 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-20 08:55:50.427392 | orchestrator | 08:55:50.426 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=4b55da63-4511-4fd7-bb36-723ea56fdd0c]
2025-09-20 08:55:52.591160 | orchestrator | 08:55:52.590 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=faf2d779-3741-4333-9a6a-67d0ebd0d2e8]
2025-09-20 08:55:52.619974 | orchestrator | 08:55:52.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=5810f150-0213-4ab8-9336-aa67cac6df2b]
2025-09-20 08:55:52.664052 | orchestrator | 08:55:52.663 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=03b2b7ad-c51a-4c61-a057-9ad554ca1a72]
2025-09-20 08:55:52.689446 | orchestrator | 08:55:52.689 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=d4ce8cad-bc1d-4843-90cc-8408c6fa71a6]
2025-09-20 08:55:52.712680 | orchestrator | 08:55:52.712 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=b9271ccd-95e0-4362-9036-036ce1f0e590]
2025-09-20 08:55:52.719456 | orchestrator | 08:55:52.719 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=ed050e13-890c-4196-a879-2427cfc2dfe9]
2025-09-20 08:55:53.426404 | orchestrator | 08:55:53.425 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=524d4e47-7c4b-465e-8ac5-f0c226e0ec8a]
2025-09-20 08:55:53.430714 | orchestrator | 08:55:53.430 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-20 08:55:53.433487 | orchestrator | 08:55:53.433 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-20 08:55:53.433809 | orchestrator | 08:55:53.433 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-20 08:55:53.669333 | orchestrator | 08:55:53.669 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=b489221b-2007-4284-a443-3617e67ced90]
2025-09-20 08:55:53.677850 | orchestrator | 08:55:53.677 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-20 08:55:53.677952 | orchestrator | 08:55:53.677 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-20 08:55:53.678127 | orchestrator | 08:55:53.677 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-20 08:55:53.679046 | orchestrator | 08:55:53.678 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-20 08:55:53.682344 | orchestrator | 08:55:53.681 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-20 08:55:53.684217 | orchestrator | 08:55:53.684 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-20 08:55:53.858836 | orchestrator | 08:55:53.858 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=73cf469a-97c4-4d9e-9607-5facad065b52]
2025-09-20 08:55:53.888492 | orchestrator | 08:55:53.888 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=3216d2da-2161-43c7-9e7c-26d84546d6eb]
2025-09-20 08:55:53.899701 | orchestrator | 08:55:53.899 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-20 08:55:53.899806 | orchestrator | 08:55:53.899 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-20 08:55:53.901327 | orchestrator | 08:55:53.901 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-20 08:55:53.903055 | orchestrator | 08:55:53.902 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-20 08:55:54.109050 | orchestrator | 08:55:54.108 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=a5673b2f-09e0-4445-b273-83ab5d155f09]
2025-09-20 08:55:54.122791 | orchestrator | 08:55:54.122 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-20 08:55:54.366235 | orchestrator | 08:55:54.365 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=b7fa57f2-8a5d-4143-b79a-813373dbe297]
2025-09-20 08:55:54.381657 | orchestrator | 08:55:54.381 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-20 08:55:54.408519 | orchestrator | 08:55:54.408 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=716e2b10-c45a-44a5-8830-cd5a8925834b]
2025-09-20 08:55:54.421619 | orchestrator | 08:55:54.421 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-20 08:55:54.571822 | orchestrator | 08:55:54.571 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=4267e73f-7dd8-4295-a8d9-c51df8e290ab]
2025-09-20 08:55:54.590398 | orchestrator | 08:55:54.590 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-20 08:55:54.855450 | orchestrator | 08:55:54.855 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=85ed5f99-1823-4450-82fc-e7ecd274e2d5]
2025-09-20 08:55:54.873457 | orchestrator | 08:55:54.873 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-20 08:55:54.887518 | orchestrator | 08:55:54.887 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=79f28b98-404d-4d96-bed1-b863f1be99bf]
2025-09-20 08:55:54.900571 | orchestrator | 08:55:54.900 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-20 08:55:55.027792 | orchestrator | 08:55:55.027 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=a386892f-53cc-4b77-a168-2c6b740cc10f]
2025-09-20 08:55:55.094132 | orchestrator | 08:55:55.093 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=e45e7d6e-68a8-4ba3-ac90-6abf23822761]
2025-09-20 08:55:55.100633 | orchestrator | 08:55:55.100 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=44560a76-94db-40ba-8e8a-4975761cef61]
2025-09-20 08:55:55.184331 | orchestrator | 08:55:55.183 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=9506956f-9255-40ff-b35c-2d63708bf466]
2025-09-20 08:55:55.376441 | orchestrator | 08:55:55.376 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=3216ba02-6a3c-4b74-896d-76117701bc85]
2025-09-20 08:55:55.444715 | orchestrator | 08:55:55.444 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=c64d4be8-0793-4d41-9878-7b3ef03c75fb]
2025-09-20 08:55:55.609962 | orchestrator | 08:55:55.609 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=07d5b41d-3e1a-4220-a0b2-9ff3ed928012]
2025-09-20 08:55:55.781268 | orchestrator | 08:55:55.780 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=f38d0291-7dd1-4130-8edb-a26031c2f4a8]
2025-09-20 08:55:55.789276 | orchestrator | 08:55:55.788 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=eb7d7870-037c-4bdc-8e98-14f47f9ad215]
2025-09-20 08:55:55.984335 | orchestrator | 08:55:55.983 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=41665cf0-cc54-4b98-bce2-65efee788093]
2025-09-20 08:55:56.014481 | orchestrator | 08:55:56.013 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-20 08:55:56.038340 | orchestrator | 08:55:56.035 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-20 08:55:56.038389 | orchestrator | 08:55:56.035 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-20 08:55:56.038394 | orchestrator | 08:55:56.036 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-20 08:55:56.058081 | orchestrator | 08:55:56.054 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-20 08:55:56.058588 | orchestrator | 08:55:56.058 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-20 08:55:56.058645 | orchestrator | 08:55:56.058 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-20 08:55:57.891167 | orchestrator | 08:55:57.890 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=49169694-3774-4123-bec0-a0c6570b0d1f]
2025-09-20 08:55:57.902500 | orchestrator | 08:55:57.902 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-20 08:55:57.907122 | orchestrator | 08:55:57.906 STDOUT terraform: local_file.inventory: Creating...
2025-09-20 08:55:57.907491 | orchestrator | 08:55:57.907 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-20 08:55:57.913682 | orchestrator | 08:55:57.913 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=deaa4646557bfb95d33ea8dd89d1d7b348950a58]
2025-09-20 08:55:57.915670 | orchestrator | 08:55:57.915 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=877c1e2bc8caf89f0708a92b9075360b519a7fec]
2025-09-20 08:55:58.623972 | orchestrator | 08:55:58.623 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=49169694-3774-4123-bec0-a0c6570b0d1f]
2025-09-20 08:56:06.036632 | orchestrator | 08:56:06.036 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-20 08:56:06.039593 | orchestrator | 08:56:06.039 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-20 08:56:06.039771 | orchestrator | 08:56:06.039 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-20 08:56:06.056048 | orchestrator | 08:56:06.055 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-20 08:56:06.060149 | orchestrator | 08:56:06.059 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-20 08:56:06.073542 | orchestrator | 08:56:06.073 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-20 08:56:16.037806 | orchestrator | 08:56:16.037 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-20 08:56:16.040963 | orchestrator | 08:56:16.040 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-20 08:56:16.041075 | orchestrator | 08:56:16.040 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-20 08:56:16.056383 | orchestrator | 08:56:16.056 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-20 08:56:16.061663 | orchestrator | 08:56:16.061 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-20 08:56:16.074704 | orchestrator | 08:56:16.074 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-20 08:56:26.040207 | orchestrator | 08:56:26.039 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-20 08:56:26.041264 | orchestrator | 08:56:26.040 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-20 08:56:26.041687 | orchestrator | 08:56:26.041 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-09-20 08:56:26.057467 | orchestrator | 08:56:26.057 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-09-20 08:56:26.061604 | orchestrator | 08:56:26.061 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-20 08:56:26.075875 | orchestrator | 08:56:26.075 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-20 08:56:26.577861 | orchestrator | 08:56:26.577 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=1eb8dd78-5750-4698-83fc-93e9557a813b]
2025-09-20 08:56:26.639670 | orchestrator | 08:56:26.639 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=a5a2b5a3-40a1-4d15-b436-db9ba56f76af]
2025-09-20 08:56:26.727748 | orchestrator | 08:56:26.727 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=2a5b3b2d-0b4d-4171-8273-3672a23182ed]
2025-09-20 08:56:26.810627 | orchestrator | 08:56:26.810 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=313b195f-289d-4313-bb21-065f69c3daeb]
2025-09-20 08:56:36.042980 | orchestrator | 08:56:36.042 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2025-09-20 08:56:36.062117 | orchestrator | 08:56:36.061 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2025-09-20 08:56:37.026877 | orchestrator | 08:56:37.026 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=2150c92a-8aa2-4ed8-b344-259b8cd8f9d9]
2025-09-20 08:56:37.178739 | orchestrator | 08:56:37.178 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=b1980af3-5148-4211-985d-9e72ff8515bc]
2025-09-20 08:56:37.200841 | orchestrator | 08:56:37.195 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-20 08:56:37.207568 | orchestrator | 08:56:37.207 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7326529433246458273]
2025-09-20 08:56:37.219350 | orchestrator | 08:56:37.219 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-20 08:56:37.219753 | orchestrator | 08:56:37.219 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-20 08:56:37.228042 | orchestrator | 08:56:37.225 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-20 08:56:37.245221 | orchestrator | 08:56:37.244 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-20 08:56:37.246047 | orchestrator | 08:56:37.245 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-20 08:56:37.254478 | orchestrator | 08:56:37.254 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-20 08:56:37.262066 | orchestrator | 08:56:37.256 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-20 08:56:37.267922 | orchestrator | 08:56:37.267 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-20 08:56:37.275231 | orchestrator | 08:56:37.275 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-20 08:56:37.305955 | orchestrator | 08:56:37.305 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-20 08:56:40.692774 | orchestrator | 08:56:40.692 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=1eb8dd78-5750-4698-83fc-93e9557a813b/a6b9e5ea-ad72-4152-982a-d01dd494947d]
2025-09-20 08:56:40.782103 | orchestrator | 08:56:40.781 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=2150c92a-8aa2-4ed8-b344-259b8cd8f9d9/e93e8b04-9e7b-45a5-9708-eecfe0538f8b]
2025-09-20 08:56:40.789415 | orchestrator | 08:56:40.788 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=1eb8dd78-5750-4698-83fc-93e9557a813b/91334aab-4987-4e71-91fe-c625707f6cc5]
2025-09-20 08:56:40.812309 | orchestrator | 08:56:40.811 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=2150c92a-8aa2-4ed8-b344-259b8cd8f9d9/fb2cb8e7-ed33-4daf-81ac-3030de87c650]
2025-09-20 08:56:40.816582 | orchestrator | 08:56:40.816 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=2a5b3b2d-0b4d-4171-8273-3672a23182ed/c2415bc7-a1cc-4fd3-8755-923259240f26]
2025-09-20 08:56:40.840078 | orchestrator | 08:56:40.839 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=2a5b3b2d-0b4d-4171-8273-3672a23182ed/e1dd809b-bff8-46fb-aa79-1858a713f2a9]
2025-09-20 08:56:42.017210 | orchestrator | 08:56:42.016 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=1eb8dd78-5750-4698-83fc-93e9557a813b/358b31db-4e32-4fff-a843-fcadc4546d57]
2025-09-20 08:56:42.450378 | orchestrator | 08:56:42.449 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=2a5b3b2d-0b4d-4171-8273-3672a23182ed/a4838d5a-524e-41b4-858a-00cf9cd1291a]
2025-09-20 08:56:46.925097 | orchestrator | 08:56:46.924 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=2150c92a-8aa2-4ed8-b344-259b8cd8f9d9/41170e96-3e47-41ac-ae12-e293d14045c9]
2025-09-20 08:56:47.305489 | orchestrator | 08:56:47.305 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-20 08:56:57.309396 | orchestrator | 08:56:57.309 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-20 08:56:58.017018 | orchestrator | 08:56:58.016 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=173fcd9e-da5d-4663-95d0-cc1bee8e8dc1]
2025-09-20 08:56:58.033907 | orchestrator | 08:56:58.033 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-20 08:56:58.034103 | orchestrator | 08:56:58.033 STDOUT terraform: Outputs:
2025-09-20 08:56:58.034132 | orchestrator | 08:56:58.033 STDOUT terraform: manager_address =
2025-09-20 08:56:58.034143 | orchestrator | 08:56:58.033 STDOUT terraform: private_key =
2025-09-20 08:56:58.126683 | orchestrator | ok: Runtime: 0:01:18.788081
2025-09-20 08:56:58.150305 |
2025-09-20 08:56:58.150411 | TASK [Create infrastructure (stable)]
2025-09-20 08:56:58.683214 | orchestrator | skipping: Conditional result was False
2025-09-20 08:56:58.703774 |
2025-09-20 08:56:58.703952 | TASK [Fetch manager address]
2025-09-20 08:56:59.120060 | orchestrator | ok
2025-09-20 08:56:59.129609 |
2025-09-20 08:56:59.129795 | TASK [Set manager_host address]
2025-09-20 08:56:59.208865 | orchestrator | ok
2025-09-20 08:56:59.218795 |
2025-09-20 08:56:59.218956 | LOOP [Update ansible collections]
2025-09-20 08:57:00.007401 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-20 08:57:00.007837 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-20 08:57:00.008437 | orchestrator | Starting galaxy collection install process
2025-09-20 08:57:00.008509 | orchestrator | Process install dependency map
2025-09-20 08:57:00.008547 | orchestrator | Starting collection install process
2025-09-20 08:57:00.008581 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2025-09-20 08:57:00.008620 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2025-09-20 08:57:00.008660 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-20 08:57:00.008759 | orchestrator | ok: Item: commons Runtime: 0:00:00.495536
2025-09-20 08:57:00.829780 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-20 08:57:00.829948 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-20 08:57:00.830001 | orchestrator | Starting galaxy collection install process
2025-09-20 08:57:00.830041 | orchestrator | Process install dependency map
2025-09-20 08:57:00.830080 | orchestrator | Starting collection install process
2025-09-20 08:57:00.830115 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2025-09-20 08:57:00.830150 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2025-09-20 08:57:00.830185 | orchestrator | osism.services:999.0.0 was installed successfully
2025-09-20 08:57:00.830238 | orchestrator | ok: Item: services Runtime: 0:00:00.559165
2025-09-20 08:57:00.852947 |
2025-09-20 08:57:00.853100 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-09-20 08:57:11.392277 | orchestrator | ok
2025-09-20 08:57:11.401811 |
2025-09-20 08:57:11.401924 | TASK [Wait a little longer for the manager so that
everything is ready] 2025-09-20 08:58:11.443559 | orchestrator | ok 2025-09-20 08:58:11.453655 | 2025-09-20 08:58:11.453800 | TASK [Fetch manager ssh hostkey] 2025-09-20 08:58:13.022872 | orchestrator | Output suppressed because no_log was given 2025-09-20 08:58:13.038582 | 2025-09-20 08:58:13.038807 | TASK [Get ssh keypair from terraform environment] 2025-09-20 08:58:13.577398 | orchestrator | ok: Runtime: 0:00:00.006725 2025-09-20 08:58:13.592744 | 2025-09-20 08:58:13.592896 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-20 08:58:13.628749 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-20 08:58:13.638887 | 2025-09-20 08:58:13.639012 | TASK [Run manager part 0] 2025-09-20 08:58:14.443464 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-20 08:58:14.486161 | orchestrator | 2025-09-20 08:58:14.486213 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-20 08:58:14.486221 | orchestrator | 2025-09-20 08:58:14.486233 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-20 08:58:16.615668 | orchestrator | ok: [testbed-manager] 2025-09-20 08:58:16.615715 | orchestrator | 2025-09-20 08:58:16.615737 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-20 08:58:16.615746 | orchestrator | 2025-09-20 08:58:16.615755 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 08:58:18.251345 | orchestrator | ok: [testbed-manager] 2025-09-20 08:58:18.251407 | orchestrator | 2025-09-20 08:58:18.251417 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-20 08:58:18.850176 | 
orchestrator | ok: [testbed-manager] 2025-09-20 08:58:18.850289 | orchestrator | 2025-09-20 08:58:18.850301 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-20 08:58:18.888894 | orchestrator | skipping: [testbed-manager] 2025-09-20 08:58:18.888941 | orchestrator | 2025-09-20 08:58:18.888952 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-20 08:58:18.914209 | orchestrator | skipping: [testbed-manager] 2025-09-20 08:58:18.914248 | orchestrator | 2025-09-20 08:58:18.914255 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-20 08:58:18.942512 | orchestrator | skipping: [testbed-manager] 2025-09-20 08:58:18.942562 | orchestrator | 2025-09-20 08:58:18.942568 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-20 08:58:18.963745 | orchestrator | skipping: [testbed-manager] 2025-09-20 08:58:18.963805 | orchestrator | 2025-09-20 08:58:18.963817 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-20 08:58:18.984663 | orchestrator | skipping: [testbed-manager] 2025-09-20 08:58:18.984699 | orchestrator | 2025-09-20 08:58:18.984707 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-20 08:58:19.013295 | orchestrator | skipping: [testbed-manager] 2025-09-20 08:58:19.013348 | orchestrator | 2025-09-20 08:58:19.013359 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-20 08:58:19.038780 | orchestrator | skipping: [testbed-manager] 2025-09-20 08:58:19.038823 | orchestrator | 2025-09-20 08:58:19.038830 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-20 08:58:19.645196 | orchestrator | changed: [testbed-manager] 2025-09-20 08:58:19.645239 | 
orchestrator | 2025-09-20 08:58:19.645247 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-20 09:00:50.637856 | orchestrator | changed: [testbed-manager] 2025-09-20 09:00:50.637950 | orchestrator | 2025-09-20 09:00:50.637968 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-20 09:02:05.229228 | orchestrator | changed: [testbed-manager] 2025-09-20 09:02:05.229313 | orchestrator | 2025-09-20 09:02:05.229330 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-20 09:02:29.312385 | orchestrator | changed: [testbed-manager] 2025-09-20 09:02:29.312483 | orchestrator | 2025-09-20 09:02:29.312503 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-20 09:02:38.012492 | orchestrator | changed: [testbed-manager] 2025-09-20 09:02:38.012580 | orchestrator | 2025-09-20 09:02:38.012597 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-20 09:02:38.061137 | orchestrator | ok: [testbed-manager] 2025-09-20 09:02:38.061199 | orchestrator | 2025-09-20 09:02:38.061213 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-20 09:02:38.777729 | orchestrator | ok: [testbed-manager] 2025-09-20 09:02:38.777843 | orchestrator | 2025-09-20 09:02:38.777864 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-20 09:02:39.511955 | orchestrator | changed: [testbed-manager] 2025-09-20 09:02:39.512037 | orchestrator | 2025-09-20 09:02:39.512052 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-20 09:02:45.928867 | orchestrator | changed: [testbed-manager] 2025-09-20 09:02:45.928961 | orchestrator | 2025-09-20 09:02:45.928999 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-09-20 09:02:51.831773 | orchestrator | changed: [testbed-manager] 2025-09-20 09:02:51.831898 | orchestrator | 2025-09-20 09:02:51.831918 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-20 09:02:54.415907 | orchestrator | changed: [testbed-manager] 2025-09-20 09:02:54.415949 | orchestrator | 2025-09-20 09:02:54.415955 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-20 09:02:56.081246 | orchestrator | changed: [testbed-manager] 2025-09-20 09:02:56.081325 | orchestrator | 2025-09-20 09:02:56.081342 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-20 09:02:57.138609 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-20 09:02:57.138686 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-20 09:02:57.138701 | orchestrator | 2025-09-20 09:02:57.138713 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-20 09:02:57.181515 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-20 09:02:57.181592 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-20 09:02:57.181607 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-20 09:02:57.181619 | orchestrator | deprecation_warnings=False in ansible.cfg. 
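The venv steps traced above ("Create venv directory", then installing netaddr, ansible-core, requests and docker into it) reduce to a standard `python3 -m venv` bootstrap. A minimal offline sketch follows; the testbed uses `/opt/venv`, a temp directory stands in here, `--without-pip` keeps it runnable without network or the distro's ensurepip package, and the pip line is left as a comment:

```shell
# Create a throwaway virtualenv mirroring the /opt/venv bootstrap.
venvdir=$(mktemp -d)/venv
python3 -m venv --without-pip "$venvdir"
# The job then pins its tooling into the venv, roughly:
#   "$venvdir/bin/pip" install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
"$venvdir/bin/python" -c 'import sys; print("venv ok" if sys.prefix != sys.base_prefix else "not a venv")'
```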
2025-09-20 09:03:00.333791 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-20 09:03:00.333901 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-20 09:03:00.333917 | orchestrator | 2025-09-20 09:03:00.333930 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-20 09:03:00.925991 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:00.926104 | orchestrator | 2025-09-20 09:03:00.926122 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-20 09:03:21.346371 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-20 09:03:21.346426 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-20 09:03:21.346437 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-20 09:03:21.346445 | orchestrator | 2025-09-20 09:03:21.346453 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-20 09:03:23.695010 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-20 09:03:23.695044 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-20 09:03:23.695048 | orchestrator | 2025-09-20 09:03:23.695053 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-20 09:03:23.695058 | orchestrator | 2025-09-20 09:03:23.695062 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 09:03:25.148220 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:25.148254 | orchestrator | 2025-09-20 09:03:25.148356 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-20 09:03:25.192940 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:25.193000 | 
orchestrator | 2025-09-20 09:03:25.193010 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-20 09:03:25.253366 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:25.253423 | orchestrator | 2025-09-20 09:03:25.253432 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-20 09:03:25.996074 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:25.996157 | orchestrator | 2025-09-20 09:03:25.996173 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-20 09:03:26.689521 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:26.689556 | orchestrator | 2025-09-20 09:03:26.689564 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-20 09:03:27.980039 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-20 09:03:27.980633 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-20 09:03:27.980685 | orchestrator | 2025-09-20 09:03:27.980711 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-20 09:03:29.340805 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:29.340904 | orchestrator | 2025-09-20 09:03:29.340917 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-20 09:03:31.031741 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-20 09:03:31.031804 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-20 09:03:31.031818 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-20 09:03:31.031854 | orchestrator | 2025-09-20 09:03:31.031868 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-20 09:03:31.087773 | orchestrator | skipping: 
[testbed-manager] 2025-09-20 09:03:31.087963 | orchestrator | 2025-09-20 09:03:31.087985 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-20 09:03:31.599254 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:31.599294 | orchestrator | 2025-09-20 09:03:31.599304 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-20 09:03:31.669326 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:03:31.669371 | orchestrator | 2025-09-20 09:03:31.669383 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-20 09:03:32.465874 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 09:03:32.465957 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:32.465974 | orchestrator | 2025-09-20 09:03:32.465986 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-20 09:03:32.497676 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:03:32.497746 | orchestrator | 2025-09-20 09:03:32.497760 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-20 09:03:32.534271 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:03:32.534334 | orchestrator | 2025-09-20 09:03:32.534348 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-20 09:03:32.568522 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:03:32.568590 | orchestrator | 2025-09-20 09:03:32.568605 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-20 09:03:32.614734 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:03:32.614818 | orchestrator | 2025-09-20 09:03:32.614857 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-20 09:03:33.276891 | orchestrator 
| ok: [testbed-manager] 2025-09-20 09:03:33.337544 | orchestrator | 2025-09-20 09:03:33.337600 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-20 09:03:33.337614 | orchestrator | 2025-09-20 09:03:33.337626 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 09:03:34.659109 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:34.659189 | orchestrator | 2025-09-20 09:03:34.659203 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-20 09:03:35.620176 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:35.620259 | orchestrator | 2025-09-20 09:03:35.620276 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:03:35.620292 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-20 09:03:35.620305 | orchestrator | 2025-09-20 09:03:35.841941 | orchestrator | ok: Runtime: 0:05:21.791600 2025-09-20 09:03:35.860020 | 2025-09-20 09:03:35.860169 | TASK [Point out that the log in on the manager is now possible] 2025-09-20 09:03:35.900422 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-20 09:03:35.910471 | 2025-09-20 09:03:35.910623 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-20 09:03:35.948643 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
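The `osism.commons.operator` tasks traced above (create group, create user, add supplementary groups, sudoers file, `.ssh` directory, lock password) boil down to a handful of user-management commands. A dry-run sketch follows; the user and group names here are placeholders, not the role's actual defaults, and the `run` echo wrapper keeps it safe to execute without root:

```shell
# Dry-run equivalent of the operator-user play: print each command
# instead of executing it; drop the echo wrapper to apply for real.
run() { echo "+ $*"; }
run groupadd operator
run useradd -m -g operator -G adm,sudo -s /bin/bash operator
run install -d -m 0700 -o operator -g operator /home/operator/.ssh
run passwd -l operator          # "Unset & lock password"
```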
2025-09-20 09:03:35.958222 | 2025-09-20 09:03:35.958345 | TASK [Run manager part 1 + 2] 2025-09-20 09:03:36.815282 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-20 09:03:36.866384 | orchestrator | 2025-09-20 09:03:36.866466 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-20 09:03:36.866484 | orchestrator | 2025-09-20 09:03:36.866515 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 09:03:39.418427 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:39.418545 | orchestrator | 2025-09-20 09:03:39.418583 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-20 09:03:39.449556 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:03:39.449601 | orchestrator | 2025-09-20 09:03:39.449613 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-20 09:03:39.482624 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:39.482667 | orchestrator | 2025-09-20 09:03:39.482676 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-20 09:03:39.520781 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:39.520824 | orchestrator | 2025-09-20 09:03:39.520831 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-20 09:03:39.582167 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:39.582214 | orchestrator | 2025-09-20 09:03:39.582221 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-20 09:03:39.639598 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:39.639649 | orchestrator | 2025-09-20 09:03:39.639658 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-20 09:03:39.682154 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-20 09:03:39.682201 | orchestrator | 2025-09-20 09:03:39.682210 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-20 09:03:40.428596 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:40.428667 | orchestrator | 2025-09-20 09:03:40.428684 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-20 09:03:40.470780 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:03:40.470821 | orchestrator | 2025-09-20 09:03:40.470832 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-20 09:03:41.677086 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:41.677163 | orchestrator | 2025-09-20 09:03:41.677183 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-20 09:03:42.212208 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:42.212281 | orchestrator | 2025-09-20 09:03:42.212297 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-20 09:03:43.279636 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:43.279710 | orchestrator | 2025-09-20 09:03:43.279729 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-20 09:03:59.151414 | orchestrator | changed: [testbed-manager] 2025-09-20 09:03:59.151483 | orchestrator | 2025-09-20 09:03:59.151497 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-20 09:03:59.804690 | orchestrator | ok: [testbed-manager] 2025-09-20 09:03:59.804768 | orchestrator | 2025-09-20 09:03:59.804785 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-09-20 09:03:59.858371 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:03:59.858441 | orchestrator | 2025-09-20 09:03:59.858455 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-20 09:04:00.750779 | orchestrator | changed: [testbed-manager] 2025-09-20 09:04:00.750880 | orchestrator | 2025-09-20 09:04:00.750896 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-20 09:04:01.646780 | orchestrator | changed: [testbed-manager] 2025-09-20 09:04:01.646895 | orchestrator | 2025-09-20 09:04:01.646920 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-20 09:04:02.184263 | orchestrator | changed: [testbed-manager] 2025-09-20 09:04:02.184301 | orchestrator | 2025-09-20 09:04:02.184308 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-20 09:04:02.223075 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-20 09:04:02.223163 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-20 09:04:02.223177 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-20 09:04:02.223189 | orchestrator | deprecation_warnings=False in ansible.cfg. 
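The job's repeated task 'Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"' is an Ansible `wait_for` probe: connect, read the SSH banner, match a substring. The same probe can be sketched in plain bash; host, port, and timeout are placeholders, and `/dev/tcp` is a bash feature:

```shell
# Return 0 as soon as the host's port-22 banner contains "OpenSSH",
# polling until the timeout expires.
banner_matches() { case "$1" in *OpenSSH*) return 0 ;; *) return 1 ;; esac; }
wait_for_ssh() {  # usage: wait_for_ssh HOST PORT TIMEOUT_SECONDS
  deadline=$(( $(date +%s) + $3 ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    banner=$(timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2 && head -n1 <&3" 2>/dev/null)
    banner_matches "$banner" && return 0
    sleep 5
  done
  return 1
}
banner_matches "SSH-2.0-OpenSSH_9.6p1"   # exit status 0
```

Matching on the banner rather than just the open port avoids declaring the host ready while sshd is still starting.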
2025-09-20 09:04:04.410498 | orchestrator | changed: [testbed-manager] 2025-09-20 09:04:04.410555 | orchestrator | 2025-09-20 09:04:04.410564 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-20 09:04:12.680031 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-20 09:04:12.680076 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-20 09:04:12.680086 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-20 09:04:12.680094 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-20 09:04:12.680105 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-20 09:04:12.680112 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-20 09:04:12.680119 | orchestrator | 2025-09-20 09:04:12.680127 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-20 09:04:13.684150 | orchestrator | changed: [testbed-manager] 2025-09-20 09:04:13.684186 | orchestrator | 2025-09-20 09:04:13.684193 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-20 09:04:13.728082 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:04:13.728119 | orchestrator | 2025-09-20 09:04:13.728127 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-20 09:04:16.630763 | orchestrator | changed: [testbed-manager] 2025-09-20 09:04:16.630804 | orchestrator | 2025-09-20 09:04:16.630813 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-20 09:04:16.674410 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:04:16.674483 | orchestrator | 2025-09-20 09:04:16.674498 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-20 09:06:00.323635 | orchestrator | changed: [testbed-manager] 2025-09-20 
09:06:00.323738 | orchestrator | 2025-09-20 09:06:00.323758 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-20 09:06:01.501220 | orchestrator | ok: [testbed-manager] 2025-09-20 09:06:01.501256 | orchestrator | 2025-09-20 09:06:01.501263 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:06:01.501270 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-20 09:06:01.501275 | orchestrator | 2025-09-20 09:06:02.083300 | orchestrator | ok: Runtime: 0:02:25.328907 2025-09-20 09:06:02.102157 | 2025-09-20 09:06:02.102332 | TASK [Reboot manager] 2025-09-20 09:06:03.637427 | orchestrator | ok: Runtime: 0:00:00.935597 2025-09-20 09:06:03.646796 | 2025-09-20 09:06:03.646971 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-20 09:06:20.077741 | orchestrator | ok 2025-09-20 09:06:20.094688 | 2025-09-20 09:06:20.094786 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-20 09:07:20.122496 | orchestrator | ok 2025-09-20 09:07:20.129111 | 2025-09-20 09:07:20.129226 | TASK [Deploy manager + bootstrap nodes] 2025-09-20 09:07:22.792486 | orchestrator | 2025-09-20 09:07:22.792681 | orchestrator | # DEPLOY MANAGER 2025-09-20 09:07:22.792704 | orchestrator | 2025-09-20 09:07:22.792719 | orchestrator | + set -e 2025-09-20 09:07:22.792732 | orchestrator | + echo 2025-09-20 09:07:22.792746 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-20 09:07:22.792764 | orchestrator | + echo 2025-09-20 09:07:22.792813 | orchestrator | + cat /opt/manager-vars.sh 2025-09-20 09:07:22.796397 | orchestrator | export NUMBER_OF_NODES=6 2025-09-20 09:07:22.796425 | orchestrator | 2025-09-20 09:07:22.796437 | orchestrator | export CEPH_VERSION=reef 2025-09-20 09:07:22.796450 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-20 09:07:22.796463 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-20 09:07:22.796485 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-20 09:07:22.796497 | orchestrator | 2025-09-20 09:07:22.796515 | orchestrator | export ARA=false 2025-09-20 09:07:22.796527 | orchestrator | export DEPLOY_MODE=manager 2025-09-20 09:07:22.796545 | orchestrator | export TEMPEST=false 2025-09-20 09:07:22.796571 | orchestrator | export IS_ZUUL=true 2025-09-20 09:07:22.796583 | orchestrator | 2025-09-20 09:07:22.796601 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 09:07:22.796613 | orchestrator | export EXTERNAL_API=false 2025-09-20 09:07:22.796624 | orchestrator | 2025-09-20 09:07:22.796635 | orchestrator | export IMAGE_USER=ubuntu 2025-09-20 09:07:22.796649 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-20 09:07:22.796660 | orchestrator | 2025-09-20 09:07:22.796671 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-20 09:07:22.796688 | orchestrator | 2025-09-20 09:07:22.796699 | orchestrator | + echo 2025-09-20 09:07:22.796712 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 09:07:22.797404 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 09:07:22.797422 | orchestrator | ++ INTERACTIVE=false 2025-09-20 09:07:22.797436 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 09:07:22.797450 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 09:07:22.797467 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 09:07:22.797481 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 09:07:22.797494 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 09:07:22.797507 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 09:07:22.797519 | orchestrator | ++ CEPH_VERSION=reef 2025-09-20 09:07:22.797539 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 09:07:22.797550 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 09:07:22.797561 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 09:07:22.797572 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 09:07:22.797584 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 09:07:22.797603 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 09:07:22.797619 | orchestrator | ++ export ARA=false 2025-09-20 09:07:22.797630 | orchestrator | ++ ARA=false 2025-09-20 09:07:22.797641 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 09:07:22.797653 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 09:07:22.797664 | orchestrator | ++ export TEMPEST=false 2025-09-20 09:07:22.797675 | orchestrator | ++ TEMPEST=false 2025-09-20 09:07:22.797686 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 09:07:22.797696 | orchestrator | ++ IS_ZUUL=true 2025-09-20 09:07:22.797707 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 09:07:22.797719 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 09:07:22.797730 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 09:07:22.797741 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 09:07:22.797752 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 09:07:22.797763 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 09:07:22.797809 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 09:07:22.797821 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 09:07:22.797833 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 09:07:22.797843 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 09:07:22.797855 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-20 09:07:22.855673 | orchestrator | + docker version 2025-09-20 09:07:23.135275 | orchestrator | Client: Docker Engine - Community 2025-09-20 09:07:23.135382 | orchestrator | Version: 27.5.1 2025-09-20 09:07:23.135399 | orchestrator | API version: 1.47 2025-09-20 09:07:23.135411 | orchestrator | Go version: go1.22.11 2025-09-20 09:07:23.135423 | orchestrator | Git commit: 9f9e405 2025-09-20 09:07:23.135435 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-20 09:07:23.135447 | orchestrator | OS/Arch: linux/amd64 2025-09-20 09:07:23.135459 | orchestrator | Context: default 2025-09-20 09:07:23.135470 | orchestrator | 2025-09-20 09:07:23.135482 | orchestrator | Server: Docker Engine - Community 2025-09-20 09:07:23.135494 | orchestrator | Engine: 2025-09-20 09:07:23.135506 | orchestrator | Version: 27.5.1 2025-09-20 09:07:23.135517 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-20 09:07:23.135559 | orchestrator | Go version: go1.22.11 2025-09-20 09:07:23.135571 | orchestrator | Git commit: 4c9b3b0 2025-09-20 09:07:23.135582 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-20 09:07:23.135593 | orchestrator | OS/Arch: linux/amd64 2025-09-20 09:07:23.135605 | orchestrator | Experimental: false 2025-09-20 09:07:23.135616 | orchestrator | containerd: 2025-09-20 09:07:23.135627 | orchestrator | Version: 1.7.27 2025-09-20 09:07:23.135638 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-09-20 09:07:23.135650 | orchestrator | runc: 2025-09-20 09:07:23.135662 | orchestrator | Version: 1.2.5 2025-09-20 09:07:23.135673 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-09-20 09:07:23.135684 | orchestrator | docker-init: 2025-09-20 09:07:23.135695 | orchestrator | Version: 0.19.0 2025-09-20 09:07:23.135707 | orchestrator | GitCommit: de40ad0 2025-09-20 09:07:23.137686 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-20 09:07:23.149121 | orchestrator | + set -e 2025-09-20 09:07:23.149191 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 09:07:23.149206 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 09:07:23.149220 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 09:07:23.149232 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 09:07:23.149243 | orchestrator | ++ CEPH_VERSION=reef 2025-09-20 09:07:23.149255 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 
09:07:23.149267 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 09:07:23.149278 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 09:07:23.149296 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 09:07:23.149307 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 09:07:23.149318 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 09:07:23.149330 | orchestrator | ++ export ARA=false 2025-09-20 09:07:23.149341 | orchestrator | ++ ARA=false 2025-09-20 09:07:23.149352 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 09:07:23.149363 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 09:07:23.149373 | orchestrator | ++ export TEMPEST=false 2025-09-20 09:07:23.149384 | orchestrator | ++ TEMPEST=false 2025-09-20 09:07:23.149395 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 09:07:23.149406 | orchestrator | ++ IS_ZUUL=true 2025-09-20 09:07:23.149417 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 09:07:23.149428 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 09:07:23.149439 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 09:07:23.149450 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 09:07:23.149469 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 09:07:23.149480 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 09:07:23.149491 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 09:07:23.149502 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 09:07:23.149513 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 09:07:23.149524 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 09:07:23.149535 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 09:07:23.149546 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 09:07:23.149557 | orchestrator | ++ INTERACTIVE=false 2025-09-20 09:07:23.149568 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 09:07:23.149581 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2025-09-20 09:07:23.149928 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 09:07:23.149943 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 09:07:23.149955 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-20 09:07:23.157402 | orchestrator | + set -e 2025-09-20 09:07:23.157927 | orchestrator | + VERSION=reef 2025-09-20 09:07:23.158467 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-20 09:07:23.165122 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-20 09:07:23.165154 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-20 09:07:23.169345 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-20 09:07:23.176289 | orchestrator | + set -e 2025-09-20 09:07:23.176776 | orchestrator | + VERSION=2024.2 2025-09-20 09:07:23.177398 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-20 09:07:23.180952 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-20 09:07:23.180975 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-20 09:07:23.185049 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-20 09:07:23.185973 | orchestrator | ++ semver latest 7.0.0 2025-09-20 09:07:23.252335 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-20 09:07:23.252373 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 09:07:23.252384 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-20 09:07:23.252396 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-20 09:07:23.353256 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-20 09:07:23.356852 | orchestrator | + source /opt/venv/bin/activate 2025-09-20 09:07:23.358176 | orchestrator | ++ 
deactivate nondestructive 2025-09-20 09:07:23.358235 | orchestrator | ++ '[' -n '' ']' 2025-09-20 09:07:23.358248 | orchestrator | ++ '[' -n '' ']' 2025-09-20 09:07:23.358259 | orchestrator | ++ hash -r 2025-09-20 09:07:23.358270 | orchestrator | ++ '[' -n '' ']' 2025-09-20 09:07:23.358281 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-20 09:07:23.358431 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-20 09:07:23.358446 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-20 09:07:23.358458 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-20 09:07:23.358533 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-20 09:07:23.358546 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-20 09:07:23.358557 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-20 09:07:23.358632 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-20 09:07:23.358647 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-20 09:07:23.358658 | orchestrator | ++ export PATH 2025-09-20 09:07:23.358724 | orchestrator | ++ '[' -n '' ']' 2025-09-20 09:07:23.358738 | orchestrator | ++ '[' -z '' ']' 2025-09-20 09:07:23.358837 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-20 09:07:23.358850 | orchestrator | ++ PS1='(venv) ' 2025-09-20 09:07:23.358861 | orchestrator | ++ export PS1 2025-09-20 09:07:23.358878 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-20 09:07:23.358889 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-20 09:07:23.358904 | orchestrator | ++ hash -r 2025-09-20 09:07:23.359441 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-20 09:07:24.637089 | orchestrator | 2025-09-20 09:07:24.637194 | orchestrator | PLAY [Copy custom facts] 
******************************************************* 2025-09-20 09:07:24.637209 | orchestrator | 2025-09-20 09:07:24.637221 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-20 09:07:25.209656 | orchestrator | ok: [testbed-manager] 2025-09-20 09:07:25.209768 | orchestrator | 2025-09-20 09:07:25.209783 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-20 09:07:26.217679 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:26.217782 | orchestrator | 2025-09-20 09:07:26.217798 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-20 09:07:26.217811 | orchestrator | 2025-09-20 09:07:26.217823 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 09:07:28.539769 | orchestrator | ok: [testbed-manager] 2025-09-20 09:07:28.539881 | orchestrator | 2025-09-20 09:07:28.539897 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-20 09:07:28.590429 | orchestrator | ok: [testbed-manager] 2025-09-20 09:07:28.590507 | orchestrator | 2025-09-20 09:07:28.590524 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-20 09:07:29.059699 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:29.059803 | orchestrator | 2025-09-20 09:07:29.059818 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-20 09:07:29.104850 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:07:29.104935 | orchestrator | 2025-09-20 09:07:29.104949 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-20 09:07:29.430496 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:29.430598 | orchestrator | 2025-09-20 09:07:29.430613 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-09-20 09:07:29.491606 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:07:29.491670 | orchestrator | 2025-09-20 09:07:29.491686 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-20 09:07:29.849507 | orchestrator | ok: [testbed-manager] 2025-09-20 09:07:29.849587 | orchestrator | 2025-09-20 09:07:29.849604 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-20 09:07:29.968110 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:07:29.968171 | orchestrator | 2025-09-20 09:07:29.968184 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-20 09:07:29.968196 | orchestrator | 2025-09-20 09:07:29.968210 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 09:07:31.710162 | orchestrator | ok: [testbed-manager] 2025-09-20 09:07:31.710260 | orchestrator | 2025-09-20 09:07:31.710275 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-20 09:07:31.821706 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-20 09:07:31.821751 | orchestrator | 2025-09-20 09:07:31.821763 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-20 09:07:31.872694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-20 09:07:31.872732 | orchestrator | 2025-09-20 09:07:31.872744 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-20 09:07:33.006005 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-20 09:07:33.006173 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-09-20 09:07:33.006190 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-20 09:07:33.006202 | orchestrator | 2025-09-20 09:07:33.006215 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-20 09:07:34.634932 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-20 09:07:34.635087 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-20 09:07:34.635107 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-20 09:07:34.635120 | orchestrator | 2025-09-20 09:07:34.635133 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-20 09:07:35.228784 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 09:07:35.228880 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:35.228895 | orchestrator | 2025-09-20 09:07:35.228908 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-20 09:07:35.826290 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 09:07:35.826369 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:35.826381 | orchestrator | 2025-09-20 09:07:35.826390 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-09-20 09:07:35.879028 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:07:35.879083 | orchestrator | 2025-09-20 09:07:35.879094 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-20 09:07:36.194929 | orchestrator | ok: [testbed-manager] 2025-09-20 09:07:36.195040 | orchestrator | 2025-09-20 09:07:36.195052 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-20 09:07:36.258700 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-20 09:07:36.258730 | orchestrator | 2025-09-20 09:07:36.258740 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-20 09:07:37.212366 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:37.212471 | orchestrator | 2025-09-20 09:07:37.212487 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-20 09:07:38.014109 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:38.014209 | orchestrator | 2025-09-20 09:07:38.014224 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-20 09:07:49.180829 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:49.180934 | orchestrator | 2025-09-20 09:07:49.180949 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-20 09:07:49.226230 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:07:49.226304 | orchestrator | 2025-09-20 09:07:49.226318 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-20 09:07:49.226329 | orchestrator | 2025-09-20 09:07:49.226340 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 09:07:50.919811 | orchestrator | ok: [testbed-manager] 2025-09-20 09:07:50.919901 | orchestrator | 2025-09-20 09:07:50.919942 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-20 09:07:51.015186 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-20 09:07:51.015289 | orchestrator | 2025-09-20 09:07:51.015306 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-20 09:07:51.071769 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 09:07:51.071850 | orchestrator | 2025-09-20 09:07:51.071864 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-20 09:07:53.633293 | orchestrator | ok: [testbed-manager] 2025-09-20 09:07:53.633398 | orchestrator | 2025-09-20 09:07:53.633415 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-20 09:07:53.687064 | orchestrator | ok: [testbed-manager] 2025-09-20 09:07:53.687150 | orchestrator | 2025-09-20 09:07:53.687167 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-20 09:07:53.818686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-20 09:07:53.818740 | orchestrator | 2025-09-20 09:07:53.818752 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-20 09:07:56.778393 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-20 09:07:56.778486 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-20 09:07:56.778498 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-20 09:07:56.778508 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-20 09:07:56.778516 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-20 09:07:56.778525 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-20 09:07:56.778533 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-20 09:07:56.778541 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-20 09:07:56.778550 | orchestrator | 2025-09-20 09:07:56.778559 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-09-20 09:07:57.434310 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:57.434400 | orchestrator | 2025-09-20 09:07:57.434414 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-20 09:07:58.114667 | orchestrator | changed: [testbed-manager] 2025-09-20 09:07:58.114753 | orchestrator | 2025-09-20 09:07:58.114767 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-20 09:07:58.195297 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-20 09:07:58.195363 | orchestrator | 2025-09-20 09:07:58.195377 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-20 09:07:59.415469 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-20 09:07:59.415561 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-20 09:07:59.415573 | orchestrator | 2025-09-20 09:07:59.415586 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-20 09:08:00.044627 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:00.044719 | orchestrator | 2025-09-20 09:08:00.044733 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-20 09:08:00.098663 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:08:00.098744 | orchestrator | 2025-09-20 09:08:00.098757 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-20 09:08:00.162450 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-20 09:08:00.162528 | orchestrator | 2025-09-20 09:08:00.162540 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2025-09-20 09:08:00.787199 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:00.787296 | orchestrator | 2025-09-20 09:08:00.787311 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-20 09:08:00.858803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-20 09:08:00.858915 | orchestrator | 2025-09-20 09:08:00.858932 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-20 09:08:02.272537 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 09:08:02.272641 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 09:08:02.272655 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:02.272666 | orchestrator | 2025-09-20 09:08:02.272676 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-20 09:08:02.926305 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:02.926403 | orchestrator | 2025-09-20 09:08:02.926420 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-20 09:08:02.986935 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:08:02.986992 | orchestrator | 2025-09-20 09:08:02.987006 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-20 09:08:03.087225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-20 09:08:03.087297 | orchestrator | 2025-09-20 09:08:03.087311 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-20 09:08:03.653878 | orchestrator | changed: [testbed-manager] 2025-09-20 
09:08:03.654007 | orchestrator | 2025-09-20 09:08:03.654075 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-20 09:08:04.074087 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:04.074183 | orchestrator | 2025-09-20 09:08:04.074197 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-20 09:08:05.229059 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-20 09:08:05.229156 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-20 09:08:05.229171 | orchestrator | 2025-09-20 09:08:05.229184 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-20 09:08:05.795604 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:05.795749 | orchestrator | 2025-09-20 09:08:05.795779 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-20 09:08:06.163840 | orchestrator | ok: [testbed-manager] 2025-09-20 09:08:06.163938 | orchestrator | 2025-09-20 09:08:06.163955 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-20 09:08:06.506226 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:06.506312 | orchestrator | 2025-09-20 09:08:06.506327 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-20 09:08:06.541698 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:08:06.541756 | orchestrator | 2025-09-20 09:08:06.541769 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-20 09:08:06.603034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-20 09:08:06.603065 | orchestrator | 2025-09-20 09:08:06.603077 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2025-09-20 09:08:06.644399 | orchestrator | ok: [testbed-manager] 2025-09-20 09:08:06.644421 | orchestrator | 2025-09-20 09:08:06.644432 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-20 09:08:08.543650 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-20 09:08:08.543749 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-20 09:08:08.543764 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-20 09:08:08.543776 | orchestrator | 2025-09-20 09:08:08.543788 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-20 09:08:09.195734 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:09.195816 | orchestrator | 2025-09-20 09:08:09.195830 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-20 09:08:09.851835 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:09.851925 | orchestrator | 2025-09-20 09:08:09.851939 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-20 09:08:10.513548 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:10.513644 | orchestrator | 2025-09-20 09:08:10.513660 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-20 09:08:10.583454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-20 09:08:10.583532 | orchestrator | 2025-09-20 09:08:10.583547 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-20 09:08:10.641397 | orchestrator | ok: [testbed-manager] 2025-09-20 09:08:10.641498 | orchestrator | 2025-09-20 09:08:10.641515 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2025-09-20 09:08:11.310260 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-20 09:08:11.310355 | orchestrator | 2025-09-20 09:08:11.310370 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-20 09:08:11.402767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-20 09:08:11.402820 | orchestrator | 2025-09-20 09:08:11.402833 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-20 09:08:12.065458 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:12.065555 | orchestrator | 2025-09-20 09:08:12.065570 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-20 09:08:12.585176 | orchestrator | ok: [testbed-manager] 2025-09-20 09:08:12.585270 | orchestrator | 2025-09-20 09:08:12.585285 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-20 09:08:12.641303 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:08:12.641344 | orchestrator | 2025-09-20 09:08:12.641357 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-20 09:08:12.691303 | orchestrator | ok: [testbed-manager] 2025-09-20 09:08:12.691374 | orchestrator | 2025-09-20 09:08:12.691389 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-20 09:08:13.476237 | orchestrator | changed: [testbed-manager] 2025-09-20 09:08:13.477088 | orchestrator | 2025-09-20 09:08:13.477122 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-20 09:09:21.063617 | orchestrator | changed: [testbed-manager] 2025-09-20 09:09:21.063709 | orchestrator | 2025-09-20 
09:09:21.063724 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-20 09:09:22.115122 | orchestrator | ok: [testbed-manager] 2025-09-20 09:09:22.115214 | orchestrator | 2025-09-20 09:09:22.115228 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-20 09:09:22.168685 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:09:22.168711 | orchestrator | 2025-09-20 09:09:22.168725 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-20 09:09:24.770812 | orchestrator | changed: [testbed-manager] 2025-09-20 09:09:24.770951 | orchestrator | 2025-09-20 09:09:24.770968 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-20 09:09:24.830508 | orchestrator | ok: [testbed-manager] 2025-09-20 09:09:24.830560 | orchestrator | 2025-09-20 09:09:24.830575 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-20 09:09:24.830587 | orchestrator | 2025-09-20 09:09:24.830598 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-20 09:09:24.896548 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:09:24.896603 | orchestrator | 2025-09-20 09:09:24.896618 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-20 09:10:24.945197 | orchestrator | Pausing for 60 seconds 2025-09-20 09:10:24.945310 | orchestrator | changed: [testbed-manager] 2025-09-20 09:10:24.945326 | orchestrator | 2025-09-20 09:10:24.945340 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-20 09:10:28.995835 | orchestrator | changed: [testbed-manager] 2025-09-20 09:10:28.995936 | orchestrator | 2025-09-20 09:10:28.995954 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2025-09-20 09:11:10.603974 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-20 09:11:10.604088 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-09-20 09:11:10.604104 | orchestrator | changed: [testbed-manager] 2025-09-20 09:11:10.604141 | orchestrator | 2025-09-20 09:11:10.604152 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-20 09:11:20.197794 | orchestrator | changed: [testbed-manager] 2025-09-20 09:11:20.197902 | orchestrator | 2025-09-20 09:11:20.197920 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-20 09:11:20.285445 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-20 09:11:20.285502 | orchestrator | 2025-09-20 09:11:20.285516 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-20 09:11:20.285529 | orchestrator | 2025-09-20 09:11:20.285540 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-20 09:11:20.330262 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:11:20.330317 | orchestrator | 2025-09-20 09:11:20.330326 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:11:20.330336 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-09-20 09:11:20.330343 | orchestrator | 2025-09-20 09:11:20.442969 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-20 09:11:20.443005 | orchestrator | + deactivate 2025-09-20 09:11:20.443015 | orchestrator | + '[' -n 
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-20 09:11:20.443025 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-20 09:11:20.443032 | orchestrator | + export PATH 2025-09-20 09:11:20.443039 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-20 09:11:20.443046 | orchestrator | + '[' -n '' ']' 2025-09-20 09:11:20.443053 | orchestrator | + hash -r 2025-09-20 09:11:20.443077 | orchestrator | + '[' -n '' ']' 2025-09-20 09:11:20.443084 | orchestrator | + unset VIRTUAL_ENV 2025-09-20 09:11:20.443091 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-20 09:11:20.443098 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-09-20 09:11:20.443105 | orchestrator | + unset -f deactivate 2025-09-20 09:11:20.443112 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-20 09:11:20.451691 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-20 09:11:20.451718 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-20 09:11:20.451726 | orchestrator | + local max_attempts=60 2025-09-20 09:11:20.451733 | orchestrator | + local name=ceph-ansible 2025-09-20 09:11:20.451773 | orchestrator | + local attempt_num=1 2025-09-20 09:11:20.452189 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:11:20.490389 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 09:11:20.490441 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-20 09:11:20.490455 | orchestrator | + local max_attempts=60 2025-09-20 09:11:20.490468 | orchestrator | + local name=kolla-ansible 2025-09-20 09:11:20.490479 | orchestrator | + local attempt_num=1 2025-09-20 09:11:20.491651 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-20 09:11:20.528515 | orchestrator | + [[ healthy 
== \h\e\a\l\t\h\y ]] 2025-09-20 09:11:20.528557 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-20 09:11:20.528569 | orchestrator | + local max_attempts=60 2025-09-20 09:11:20.528581 | orchestrator | + local name=osism-ansible 2025-09-20 09:11:20.528592 | orchestrator | + local attempt_num=1 2025-09-20 09:11:20.529908 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-20 09:11:20.569821 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 09:11:20.569874 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-20 09:11:20.569888 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-20 09:11:21.305186 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-20 09:11:21.522820 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-20 09:11:21.522899 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-09-20 09:11:21.522913 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-09-20 09:11:21.522953 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-20 09:11:21.522966 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-09-20 09:11:21.522987 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-09-20 09:11:21.522998 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-09-20 
09:11:21.523009 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-09-20 09:11:21.523021 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-09-20 09:11:21.523031 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-09-20 09:11:21.523042 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-09-20 09:11:21.523053 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-09-20 09:11:21.523064 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-09-20 09:11:21.523075 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-20 09:11:21.523086 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-09-20 09:11:21.523097 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-09-20 09:11:21.528129 | orchestrator | ++ semver latest 7.0.0 2025-09-20 09:11:21.567842 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-20 09:11:21.567899 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 09:11:21.567914 | orchestrator | + sed -i 
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-20 09:11:21.569933 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-20 09:11:33.758197 | orchestrator | 2025-09-20 09:11:33 | INFO  | Task a95a2d5e-c325-4770-873e-04dbfdcbaadd (resolvconf) was prepared for execution. 2025-09-20 09:11:33.758295 | orchestrator | 2025-09-20 09:11:33 | INFO  | It takes a moment until task a95a2d5e-c325-4770-873e-04dbfdcbaadd (resolvconf) has been started and output is visible here. 2025-09-20 09:11:48.445115 | orchestrator | 2025-09-20 09:11:48.445223 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-20 09:11:48.445239 | orchestrator | 2025-09-20 09:11:48.445250 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 09:11:48.445284 | orchestrator | Saturday 20 September 2025 09:11:37 +0000 (0:00:00.153) 0:00:00.153 **** 2025-09-20 09:11:48.445295 | orchestrator | ok: [testbed-manager] 2025-09-20 09:11:48.445306 | orchestrator | 2025-09-20 09:11:48.445317 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-20 09:11:48.445327 | orchestrator | Saturday 20 September 2025 09:11:42 +0000 (0:00:04.812) 0:00:04.966 **** 2025-09-20 09:11:48.445337 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:11:48.445347 | orchestrator | 2025-09-20 09:11:48.445357 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-20 09:11:48.445367 | orchestrator | Saturday 20 September 2025 09:11:42 +0000 (0:00:00.071) 0:00:05.038 **** 2025-09-20 09:11:48.445377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-20 09:11:48.445388 | orchestrator | 2025-09-20 09:11:48.445397 | orchestrator | TASK 
[osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-20 09:11:48.445407 | orchestrator | Saturday 20 September 2025 09:11:42 +0000 (0:00:00.080) 0:00:05.118 **** 2025-09-20 09:11:48.445417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 09:11:48.445427 | orchestrator | 2025-09-20 09:11:48.445437 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-20 09:11:48.445446 | orchestrator | Saturday 20 September 2025 09:11:42 +0000 (0:00:00.091) 0:00:05.210 **** 2025-09-20 09:11:48.445456 | orchestrator | ok: [testbed-manager] 2025-09-20 09:11:48.445465 | orchestrator | 2025-09-20 09:11:48.445475 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-20 09:11:48.445485 | orchestrator | Saturday 20 September 2025 09:11:43 +0000 (0:00:01.147) 0:00:06.357 **** 2025-09-20 09:11:48.445494 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:11:48.445504 | orchestrator | 2025-09-20 09:11:48.445513 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-20 09:11:48.445523 | orchestrator | Saturday 20 September 2025 09:11:43 +0000 (0:00:00.068) 0:00:06.426 **** 2025-09-20 09:11:48.445533 | orchestrator | ok: [testbed-manager] 2025-09-20 09:11:48.445542 | orchestrator | 2025-09-20 09:11:48.445552 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-20 09:11:48.445562 | orchestrator | Saturday 20 September 2025 09:11:44 +0000 (0:00:00.526) 0:00:06.952 **** 2025-09-20 09:11:48.445571 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:11:48.445581 | orchestrator | 2025-09-20 09:11:48.445591 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 
2025-09-20 09:11:48.445601 | orchestrator | Saturday 20 September 2025 09:11:44 +0000 (0:00:00.090) 0:00:07.042 **** 2025-09-20 09:11:48.445611 | orchestrator | changed: [testbed-manager] 2025-09-20 09:11:48.445621 | orchestrator | 2025-09-20 09:11:48.445630 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-20 09:11:48.445640 | orchestrator | Saturday 20 September 2025 09:11:44 +0000 (0:00:00.529) 0:00:07.572 **** 2025-09-20 09:11:48.445650 | orchestrator | changed: [testbed-manager] 2025-09-20 09:11:48.445661 | orchestrator | 2025-09-20 09:11:48.445673 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-20 09:11:48.445684 | orchestrator | Saturday 20 September 2025 09:11:45 +0000 (0:00:01.139) 0:00:08.712 **** 2025-09-20 09:11:48.445695 | orchestrator | ok: [testbed-manager] 2025-09-20 09:11:48.445706 | orchestrator | 2025-09-20 09:11:48.445716 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-20 09:11:48.445755 | orchestrator | Saturday 20 September 2025 09:11:46 +0000 (0:00:00.967) 0:00:09.679 **** 2025-09-20 09:11:48.445777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-20 09:11:48.445795 | orchestrator | 2025-09-20 09:11:48.445806 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-20 09:11:48.445818 | orchestrator | Saturday 20 September 2025 09:11:47 +0000 (0:00:00.089) 0:00:09.769 **** 2025-09-20 09:11:48.445829 | orchestrator | changed: [testbed-manager] 2025-09-20 09:11:48.445840 | orchestrator | 2025-09-20 09:11:48.445852 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:11:48.445864 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 
failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 09:11:48.445876 | orchestrator | 2025-09-20 09:11:48.445887 | orchestrator | 2025-09-20 09:11:48.445898 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:11:48.445910 | orchestrator | Saturday 20 September 2025 09:11:48 +0000 (0:00:01.168) 0:00:10.937 **** 2025-09-20 09:11:48.445921 | orchestrator | =============================================================================== 2025-09-20 09:11:48.445932 | orchestrator | Gathering Facts --------------------------------------------------------- 4.81s 2025-09-20 09:11:48.445943 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2025-09-20 09:11:48.445954 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.15s 2025-09-20 09:11:48.445965 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.14s 2025-09-20 09:11:48.445976 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s 2025-09-20 09:11:48.445987 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.53s 2025-09-20 09:11:48.446108 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2025-09-20 09:11:48.446127 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-09-20 09:11:48.446137 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-09-20 09:11:48.446147 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-09-20 09:11:48.446157 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-09-20 09:11:48.446167 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of 
name servers --- 0.07s 2025-09-20 09:11:48.446176 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-09-20 09:11:48.902594 | orchestrator | + osism apply sshconfig 2025-09-20 09:12:01.005051 | orchestrator | 2025-09-20 09:12:00 | INFO  | Task c9ab8fe9-d561-47d5-aa6f-c92e34a11b86 (sshconfig) was prepared for execution. 2025-09-20 09:12:01.005165 | orchestrator | 2025-09-20 09:12:01 | INFO  | It takes a moment until task c9ab8fe9-d561-47d5-aa6f-c92e34a11b86 (sshconfig) has been started and output is visible here. 2025-09-20 09:12:12.459819 | orchestrator | 2025-09-20 09:12:12.459930 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-20 09:12:12.459947 | orchestrator | 2025-09-20 09:12:12.459959 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-20 09:12:12.459971 | orchestrator | Saturday 20 September 2025 09:12:04 +0000 (0:00:00.150) 0:00:00.150 **** 2025-09-20 09:12:12.459982 | orchestrator | ok: [testbed-manager] 2025-09-20 09:12:12.459994 | orchestrator | 2025-09-20 09:12:12.460005 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-20 09:12:12.460016 | orchestrator | Saturday 20 September 2025 09:12:05 +0000 (0:00:00.493) 0:00:00.644 **** 2025-09-20 09:12:12.460027 | orchestrator | changed: [testbed-manager] 2025-09-20 09:12:12.460039 | orchestrator | 2025-09-20 09:12:12.460050 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-20 09:12:12.460062 | orchestrator | Saturday 20 September 2025 09:12:05 +0000 (0:00:00.457) 0:00:01.101 **** 2025-09-20 09:12:12.460073 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-20 09:12:12.460084 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-20 09:12:12.460125 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1) 2025-09-20 09:12:12.460137 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-20 09:12:12.460148 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-20 09:12:12.460177 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-20 09:12:12.460189 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-20 09:12:12.460200 | orchestrator | 2025-09-20 09:12:12.460211 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-20 09:12:12.460222 | orchestrator | Saturday 20 September 2025 09:12:11 +0000 (0:00:05.722) 0:00:06.824 **** 2025-09-20 09:12:12.460233 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:12:12.460243 | orchestrator | 2025-09-20 09:12:12.460254 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-20 09:12:12.460265 | orchestrator | Saturday 20 September 2025 09:12:11 +0000 (0:00:00.065) 0:00:06.889 **** 2025-09-20 09:12:12.460276 | orchestrator | changed: [testbed-manager] 2025-09-20 09:12:12.460287 | orchestrator | 2025-09-20 09:12:12.460298 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:12:12.460310 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:12:12.460322 | orchestrator | 2025-09-20 09:12:12.460335 | orchestrator | 2025-09-20 09:12:12.460347 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:12:12.460359 | orchestrator | Saturday 20 September 2025 09:12:12 +0000 (0:00:00.594) 0:00:07.483 **** 2025-09-20 09:12:12.460372 | orchestrator | =============================================================================== 2025-09-20 09:12:12.460385 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 
5.72s 2025-09-20 09:12:12.460397 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2025-09-20 09:12:12.460410 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.49s 2025-09-20 09:12:12.460422 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.46s 2025-09-20 09:12:12.460435 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-09-20 09:12:12.739747 | orchestrator | + osism apply known-hosts 2025-09-20 09:12:24.694291 | orchestrator | 2025-09-20 09:12:24 | INFO  | Task c9d92195-fbdf-4def-b881-535212a7b86d (known-hosts) was prepared for execution. 2025-09-20 09:12:24.694393 | orchestrator | 2025-09-20 09:12:24 | INFO  | It takes a moment until task c9d92195-fbdf-4def-b881-535212a7b86d (known-hosts) has been started and output is visible here. 2025-09-20 09:12:41.565202 | orchestrator | 2025-09-20 09:12:41.565306 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-20 09:12:41.565322 | orchestrator | 2025-09-20 09:12:41.565335 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-20 09:12:41.565347 | orchestrator | Saturday 20 September 2025 09:12:28 +0000 (0:00:00.204) 0:00:00.204 **** 2025-09-20 09:12:41.565359 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-20 09:12:41.565371 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-20 09:12:41.565382 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-20 09:12:41.565393 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-20 09:12:41.565404 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-20 09:12:41.565415 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-20 09:12:41.565426 | orchestrator | ok: [testbed-manager] 
=> (item=testbed-node-5) 2025-09-20 09:12:41.565437 | orchestrator | 2025-09-20 09:12:41.565448 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-20 09:12:41.565461 | orchestrator | Saturday 20 September 2025 09:12:34 +0000 (0:00:06.011) 0:00:06.215 **** 2025-09-20 09:12:41.565498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-20 09:12:41.565512 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-20 09:12:41.565523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-20 09:12:41.565533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-20 09:12:41.565544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-20 09:12:41.565565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-20 09:12:41.565577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-20 09:12:41.565588 | orchestrator | 2025-09-20 09:12:41.565599 | orchestrator | TASK 
[osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:41.565610 | orchestrator | Saturday 20 September 2025 09:12:34 +0000 (0:00:00.197) 0:00:06.413 **** 2025-09-20 09:12:41.565625 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDK66i4pDD684Rpy+AuImNmEYmRyrC3wwLFh6u1QGhEITAzC5Yl7xahkx9aGQDcU3K158Jc7oQAZVIR1/buaRvnsat3M+zB+c8A7x6irosKNlKthXMsBhOLfUqtHoQ7FVuGr9qVT12txpJ7lse/AIiHEKp+7DBnrgaDtESFiStiB9jf2UUfrIFN450Whfy8AAG+/5L7Y4K1OI6qv3vBP3jGdDl71HbfGz2EoWcO2L60jPQvR5Ka1SJKpbAaMlQ+o7QskRp5c1bFk7ANDd02MZy6jg2giy66a2o3E0kPbAU0t2qMzq8G8HLTy0LaNrT3K1tnybzHDfrr+Gdz6vm9jbw4h5ZfimDI2YqvA7+LsbbFDALgy/bYY6AjZTvMpmvDPAGjVQMu2bRKJp8m2Zd9zcaa2IHrq8aeAnR0r0ElZji4gXSi5GJ0SSR/OLy4nzSxnncGbnwyBr41gyc8SRpmlN0Z5ypmITUtkgK89tkjtMUv0ezVpn8+Q8G1Eu4+C/9CsSU=) 2025-09-20 09:12:41.565640 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKGtLfwfQaIzocafvaQm9jrmAvA6QRvoPsSnlR6YvqJKaqgavSXJa6CI+51RNAWGh497JQQ4blABEGZVJq/cbUc=) 2025-09-20 09:12:41.565653 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC8xrJQzXVRxaJisGmYNcW/musvFOO3qUClrEEbVC4jL) 2025-09-20 09:12:41.565665 | orchestrator | 2025-09-20 09:12:41.565676 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:41.565731 | orchestrator | Saturday 20 September 2025 09:12:36 +0000 (0:00:01.199) 0:00:07.612 **** 2025-09-20 09:12:41.565762 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDMT/UZq3uKBSHhrLRSg07CQZdf0ZQYw+RxKzznFLvUBZNfENx2zt59Rdj+UZZrZIRofAcQS9k8qzeeJi8kbi6MxP4Aer1J9oGrD6ghIoLeUgs+QjQCRzQMa1u8fLd9gBGsyT3ekCDea1Drj9tREwqRmynnoromGy1UCPdAvZ/ySvA8vHaKgL0pHsRBDHZo153hSssQFWbitJvv6hExClv6U5bZ6ZgMpeWv5GloHwxN+XtWJCuHdHkUb8zQjl6UPoo6H6QUCzywrgY7iAItWsSf0eajzSWw+W+7MQCKX+p6+YaUUnkCVOs3SSwa/S9zlI3jI40g36SekuzsmFhrQiiLbpv1yFMTRGENcuTGV74urkSx2QDNmI4L/Hosq7FYbyGOaVh4CF4cs1QB43el74foUGVGBGDjUZyIhU7DxdpvDGBPBBa5cetUvG2IhoGrCxoFtwNUsZFWYfR9DVk2aJy/i8hbWmGggxJRoTAp3AUTnGQ8ZA9niVCb4IYYdVXhjRs=) 2025-09-20 09:12:41.565776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUvBeHU7qm3bw81juhhVeYNRYITOyOqyIo9ivL0g1KPCH3KBLEzFku14eZ6QUX8KAZ5Ajp+jkNX8p2gJPkSsd0=) 2025-09-20 09:12:41.565789 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOyWkgvdKxM1pm28vvhWdrRNyamnpcfU9I47GNVgiagI) 2025-09-20 09:12:41.565812 | orchestrator | 2025-09-20 09:12:41.565827 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:41.565839 | orchestrator | Saturday 20 September 2025 09:12:37 +0000 (0:00:01.068) 0:00:08.681 **** 2025-09-20 09:12:41.565851 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFgrYoVlQYVivCXij7Iac6g7QUxNTv0GhFkcejbyI4jL) 2025-09-20 09:12:41.565865 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCkOfg80epRi6FH+WR+mmTHXL8UdsD45yBm7oKuB7btVtvxGtJBD/eF9Tv4g73MfA9YkpD8wM7lB9j4zwOcFO1ADdedRHrt+sUA4j2b0usmGp73RZwbw4QbpwDATjYefX5tUq0cx3NEG9gV034JFTsVDfNZhGlKMMeawNgIh/5ok3wTeZz0e0Qkb493JJvCpx0grwAnnoQc2nmV+OCGV4aZUz/tZcmTDbM04FjY1ncnvFR/PWeylo2kdnz9CgNR8shy+aEw4zTMHTVBQlOIez8xrw8UEsNvh+4JoqQegjT9ghD4I2l9qqZCUTXd83zfdyQrJRzgupbL9lFbOrWuGIfAHNDfe69XDXiUWVj0iKSYe4urenTxwPVChUHhC1qI2I4Tz8nCIE9vPLIu4/tuENW/VlvLanxPOTC78oOjCqvALWV956Yb6s4G1rvdMoqAS9K+vrgF2fLEpQ/8hhSmy1ZXJcevJUCP2UJ3ZRiyYpMzaOf3W+gWgnI9AYojF3h8MN0=) 2025-09-20 09:12:41.565878 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEPzzu/mWsubkOL1vSSPuAF1zPN0b0pOs1Nf1hVsVPy1bJsYR1Qgc8qHNWjv6FFYEkzvC+aQXoJ7qR8QmZoEjIM=) 2025-09-20 09:12:41.565890 | orchestrator | 2025-09-20 09:12:41.565903 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:41.565915 | orchestrator | Saturday 20 September 2025 09:12:38 +0000 (0:00:01.074) 0:00:09.756 **** 2025-09-20 09:12:41.566096 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9k/1NAyLPaOpsrgqhX+WTqGk6hka8evDU7w0Fzsy55YwBjpMR/PufrMvWmbGxmyUSDresdCUa2tTuWfLFxyVXnc5WHhpigWmSr1JWRxXq1HRfBgVQ6jQtx4sWzPMBggukse5YB7TPQxZpi58jzwP1OjSanafilNeTSHO+ZyI6I1cciazn2nmJo6QMupDnt3xtgSp3pp0GrPwJLrt8+MBUkLcYhl1M5r+VJrwTcGKq1VaWJGtZTIw7pKjrnxmhRZXJRhj249FAqvv0Rc4Xs1R9gRmj3C/mRDQ1JKdKcHGxvypRoWGoWQr5tw6qLrrLjnZbQ4KDUvJT1d4efdehINrs1+4T1T4bCAfF1FcvRrqhjF8EVMWXYB20NKe0eZM5NUfqkfLuO+G83UCU6/K7sLoS+W5sYvbQucKmUzlasZRTD9C2pwQqGSiirtdJrYEb+PmL1k/OZ3A2Q+Jyfcg/w8skZ1HWW0PFIhd8HiJ7k2BV7UknKmq8IaiKbIWwiIEXQ4U=) 2025-09-20 09:12:41.566113 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINwd8/6Ek8RQJ0TaQdxycYxr7VHIOIRHcO/AfqqExj49) 2025-09-20 09:12:41.566125 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJZ+xSC6qbOkwVc1DQgMK4wu8uXG4wcR3IIsy9S1uJsW3PWLkQ/3gqhcYgRAv05Q3tUqTq3Y+TJPE/g0bEePSqQ=) 2025-09-20 09:12:41.566136 | orchestrator | 2025-09-20 09:12:41.566147 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:41.566158 | orchestrator | Saturday 20 September 2025 09:12:39 +0000 (0:00:01.046) 0:00:10.802 **** 2025-09-20 09:12:41.566169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXl5I1talujVtTxF2XpnaaIxA7fdHGoLCDcoaDlXWDqWWJXFln9qhf6w2EGl+zIel1f9v0ppnhR1YEQH/XWQG7+CK3HCH/dyFuDMzQccrE1Z3pbIYJCmlKqReohLxnHXiAP1MVaXvWu+93CG/D+WaqZQp7Tj+M/WEczXr63dm1hLveRjGqtmAH023lfw8yTVcTW20AABAKvYDYmQMZyJYfm4f2VC5YDfvXFNilXepLzRgJEuVgaBiI3BUtmNKUAQpIEOHfgWi0pETQTUWzF6+gbsrdfYnGKvzCXY61lv1C93BktXVu2jDRQPCS3L2Ck0ahNQ+GG1qyMRpclvBHW2y5QdtUQoLNIKjf9Yk3ei3nbMdiyLQmv352Kqq98/aPyY0kb49xZaKchuxCEk6+T7gqUF6VRNmkxhwiUh8hsnX24c/ovQVpIq9vJws43JCblCdSEUQoGOUM7kTxJrdTWaKVgyt+AWrYVb2RQR1pyGasocOi0fJmC36YfPKKRQteecM=) 2025-09-20 09:12:41.566181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGnQRBR6PfI5bMu2k3NEmH098j0zoZo7byQzaxqO/D/tHkqrwJlQkFDcUbLADWSJcXMBkfaJdxR5UScPuTcgZ0E=) 2025-09-20 09:12:41.566192 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJzHQV8u5cC5hele9xVbZvBDraffpMqZoLPgAUi7W98+) 2025-09-20 09:12:41.566212 | orchestrator | 2025-09-20 09:12:41.566223 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:41.566234 | orchestrator | Saturday 20 September 2025 09:12:40 +0000 (0:00:01.088) 0:00:11.890 **** 2025-09-20 09:12:41.566255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCG70qn0/+B+uSTfzWIG9zcV9IxQtVUjJMlpmcaJ5gBWqa+c7Up91ZNRQ/deVKHjZlLgGgy3ALWx1NLFXzlphIRlB0zktP/kSIAUIRQ4mkte1pHPc492wWYOwqp/K77W3MKM4xFctWmAJb2G2BWigh/vHGktqHz/xVvxN3FtvrVY0qMKcG3OkjAtpibOYlCzKOhSJDUlIbmW+JevV8W37yAdN9VpifZeLEaQJJq954MG5OuTTR8V5hRz7SiE6q/AflSg9NbEP33/BLKRxAzzytQ+sMFchWYAjCb/2KMmINxABZjaziDwZd5R5ciZopHOjlsZ5CQ9Nfl1x92iIwhzWL53ZDfcAtQ61eJEcfnMKnUXWH6K5lxTzumHCFOkOzEtneuGTK9gp2ZGiJQ/dYSVmpoKfdNdQTsnROj2kNE98XlKjdY8Ja57LX+clygSTxmHaE0LScIlwr5RzKUkdvNI9Tb4LbzHfGAqgcx7C7EkIIvMR1jZD4wq/ZoxmFncs+zU88=) 2025-09-20 09:12:53.549758 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNOgXdmCkTMV8LV6U2KazaWLs+Af25NI7DPxJuCu36GNgfQBb6PSd80FcaZxpnAvdHBT6u52XYatxQfI/ftap7E=) 2025-09-20 09:12:53.549863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFmcRuPFSAML34HJx4vNXrSArScAITcJHsZzhC2IYn9p) 2025-09-20 09:12:53.549880 | orchestrator | 2025-09-20 09:12:53.549894 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:53.549908 | orchestrator | Saturday 20 September 2025 09:12:41 +0000 (0:00:01.092) 0:00:12.982 **** 2025-09-20 09:12:53.549921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+Jm10ma8umCHLJqk0y3WAsF86KxvuwN39vPnY68caGj43uGUPBswNpeucLuUn20sxEABECktCZg3evtrmuj7iwlPiXpN+BJ307s6aP+Nvo+0axCP8O3wssGQWuLLCIDICBRTSef1H6bYZ9gv6OgIyGmbF19mcDuu9xFTbmdsmoZjbP+4R13CWqpl7O7nh69oc/YJqHVQTkA+l65uOoMG1g86z58aTW1tE870EBIYj/yyuTzIn+GZzOkY+eZKn/d08y9Y4Xh4/qU5z0tAjsY6uDwyHvGB8huszwoi1/y4nEKI8maO3CgRS5tr8+gVklvB3Attd9K11TxlZjacbRLH6thKRGyRebLF14ym60OMmqhxR0sRYHwK4DQWKo6M/Sg2smE1f1SKNk6YJULBtt5ybnddISrFfo6J1yIvCqR2FFIMQ7Cd4aLT5wBoZfcJZDHji0kIq2uVRXSKagkq01MFWNQGLhcZe25c3FD/Jt4dGycULMCQWTW87qWzPQ5SbYqM=) 2025-09-20 09:12:53.549936 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBHe6yjmWhmlf1Zj1uzGvcSd54r1GuTq/uJNIwfKWXEDPDxRXyS80RjeLTsRgW55gChjgxjAkkGEEVZRo+WWQlc=) 2025-09-20 09:12:53.549949 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF4g6Mc6iM2Xv01GIwP6aDKYSWl1Hfvzs1kkp7a+cXfR) 2025-09-20 09:12:53.549960 | orchestrator | 2025-09-20 09:12:53.549972 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-20 09:12:53.549984 | orchestrator | Saturday 20 September 2025 09:12:42 +0000 (0:00:01.006) 0:00:13.989 **** 2025-09-20 09:12:53.549996 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-20 09:12:53.550008 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-20 09:12:53.550066 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-20 09:12:53.550079 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-20 09:12:53.550091 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-20 09:12:53.550101 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-20 09:12:53.550112 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-20 09:12:53.550123 | orchestrator | 2025-09-20 09:12:53.550134 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-20 09:12:53.550165 | orchestrator | Saturday 20 September 2025 09:12:48 +0000 (0:00:05.492) 0:00:19.482 **** 2025-09-20 09:12:53.550179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-20 09:12:53.550192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-20 09:12:53.550232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-20 09:12:53.550247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-20 09:12:53.550260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-20 09:12:53.550273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-20 09:12:53.550286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-20 09:12:53.550298 | orchestrator | 2025-09-20 09:12:53.550311 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:53.550325 | orchestrator | Saturday 20 September 2025 09:12:48 +0000 (0:00:00.175) 0:00:19.657 **** 2025-09-20 09:12:53.550338 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC8xrJQzXVRxaJisGmYNcW/musvFOO3qUClrEEbVC4jL) 2025-09-20 09:12:53.550379 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDK66i4pDD684Rpy+AuImNmEYmRyrC3wwLFh6u1QGhEITAzC5Yl7xahkx9aGQDcU3K158Jc7oQAZVIR1/buaRvnsat3M+zB+c8A7x6irosKNlKthXMsBhOLfUqtHoQ7FVuGr9qVT12txpJ7lse/AIiHEKp+7DBnrgaDtESFiStiB9jf2UUfrIFN450Whfy8AAG+/5L7Y4K1OI6qv3vBP3jGdDl71HbfGz2EoWcO2L60jPQvR5Ka1SJKpbAaMlQ+o7QskRp5c1bFk7ANDd02MZy6jg2giy66a2o3E0kPbAU0t2qMzq8G8HLTy0LaNrT3K1tnybzHDfrr+Gdz6vm9jbw4h5ZfimDI2YqvA7+LsbbFDALgy/bYY6AjZTvMpmvDPAGjVQMu2bRKJp8m2Zd9zcaa2IHrq8aeAnR0r0ElZji4gXSi5GJ0SSR/OLy4nzSxnncGbnwyBr41gyc8SRpmlN0Z5ypmITUtkgK89tkjtMUv0ezVpn8+Q8G1Eu4+C/9CsSU=) 2025-09-20 09:12:53.550394 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKGtLfwfQaIzocafvaQm9jrmAvA6QRvoPsSnlR6YvqJKaqgavSXJa6CI+51RNAWGh497JQQ4blABEGZVJq/cbUc=) 2025-09-20 09:12:53.550407 | orchestrator | 2025-09-20 09:12:53.550420 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:53.550432 | orchestrator | Saturday 20 September 2025 09:12:49 +0000 (0:00:01.038) 0:00:20.696 **** 2025-09-20 09:12:53.550445 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPUvBeHU7qm3bw81juhhVeYNRYITOyOqyIo9ivL0g1KPCH3KBLEzFku14eZ6QUX8KAZ5Ajp+jkNX8p2gJPkSsd0=) 2025-09-20 09:12:53.550459 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMT/UZq3uKBSHhrLRSg07CQZdf0ZQYw+RxKzznFLvUBZNfENx2zt59Rdj+UZZrZIRofAcQS9k8qzeeJi8kbi6MxP4Aer1J9oGrD6ghIoLeUgs+QjQCRzQMa1u8fLd9gBGsyT3ekCDea1Drj9tREwqRmynnoromGy1UCPdAvZ/ySvA8vHaKgL0pHsRBDHZo153hSssQFWbitJvv6hExClv6U5bZ6ZgMpeWv5GloHwxN+XtWJCuHdHkUb8zQjl6UPoo6H6QUCzywrgY7iAItWsSf0eajzSWw+W+7MQCKX+p6+YaUUnkCVOs3SSwa/S9zlI3jI40g36SekuzsmFhrQiiLbpv1yFMTRGENcuTGV74urkSx2QDNmI4L/Hosq7FYbyGOaVh4CF4cs1QB43el74foUGVGBGDjUZyIhU7DxdpvDGBPBBa5cetUvG2IhoGrCxoFtwNUsZFWYfR9DVk2aJy/i8hbWmGggxJRoTAp3AUTnGQ8ZA9niVCb4IYYdVXhjRs=) 
2025-09-20 09:12:53.550472 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOyWkgvdKxM1pm28vvhWdrRNyamnpcfU9I47GNVgiagI) 2025-09-20 09:12:53.550486 | orchestrator | 2025-09-20 09:12:53.550498 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:53.550511 | orchestrator | Saturday 20 September 2025 09:12:50 +0000 (0:00:01.071) 0:00:21.768 **** 2025-09-20 09:12:53.550533 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFgrYoVlQYVivCXij7Iac6g7QUxNTv0GhFkcejbyI4jL) 2025-09-20 09:12:53.550546 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkOfg80epRi6FH+WR+mmTHXL8UdsD45yBm7oKuB7btVtvxGtJBD/eF9Tv4g73MfA9YkpD8wM7lB9j4zwOcFO1ADdedRHrt+sUA4j2b0usmGp73RZwbw4QbpwDATjYefX5tUq0cx3NEG9gV034JFTsVDfNZhGlKMMeawNgIh/5ok3wTeZz0e0Qkb493JJvCpx0grwAnnoQc2nmV+OCGV4aZUz/tZcmTDbM04FjY1ncnvFR/PWeylo2kdnz9CgNR8shy+aEw4zTMHTVBQlOIez8xrw8UEsNvh+4JoqQegjT9ghD4I2l9qqZCUTXd83zfdyQrJRzgupbL9lFbOrWuGIfAHNDfe69XDXiUWVj0iKSYe4urenTxwPVChUHhC1qI2I4Tz8nCIE9vPLIu4/tuENW/VlvLanxPOTC78oOjCqvALWV956Yb6s4G1rvdMoqAS9K+vrgF2fLEpQ/8hhSmy1ZXJcevJUCP2UJ3ZRiyYpMzaOf3W+gWgnI9AYojF3h8MN0=) 2025-09-20 09:12:53.550560 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEPzzu/mWsubkOL1vSSPuAF1zPN0b0pOs1Nf1hVsVPy1bJsYR1Qgc8qHNWjv6FFYEkzvC+aQXoJ7qR8QmZoEjIM=) 2025-09-20 09:12:53.550572 | orchestrator | 2025-09-20 09:12:53.550585 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:53.550597 | orchestrator | Saturday 20 September 2025 09:12:51 +0000 (0:00:01.104) 0:00:22.872 **** 2025-09-20 09:12:53.550609 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAINwd8/6Ek8RQJ0TaQdxycYxr7VHIOIRHcO/AfqqExj49) 2025-09-20 09:12:53.550626 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9k/1NAyLPaOpsrgqhX+WTqGk6hka8evDU7w0Fzsy55YwBjpMR/PufrMvWmbGxmyUSDresdCUa2tTuWfLFxyVXnc5WHhpigWmSr1JWRxXq1HRfBgVQ6jQtx4sWzPMBggukse5YB7TPQxZpi58jzwP1OjSanafilNeTSHO+ZyI6I1cciazn2nmJo6QMupDnt3xtgSp3pp0GrPwJLrt8+MBUkLcYhl1M5r+VJrwTcGKq1VaWJGtZTIw7pKjrnxmhRZXJRhj249FAqvv0Rc4Xs1R9gRmj3C/mRDQ1JKdKcHGxvypRoWGoWQr5tw6qLrrLjnZbQ4KDUvJT1d4efdehINrs1+4T1T4bCAfF1FcvRrqhjF8EVMWXYB20NKe0eZM5NUfqkfLuO+G83UCU6/K7sLoS+W5sYvbQucKmUzlasZRTD9C2pwQqGSiirtdJrYEb+PmL1k/OZ3A2Q+Jyfcg/w8skZ1HWW0PFIhd8HiJ7k2BV7UknKmq8IaiKbIWwiIEXQ4U=) 2025-09-20 09:12:53.550648 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJZ+xSC6qbOkwVc1DQgMK4wu8uXG4wcR3IIsy9S1uJsW3PWLkQ/3gqhcYgRAv05Q3tUqTq3Y+TJPE/g0bEePSqQ=) 2025-09-20 09:12:57.969283 | orchestrator | 2025-09-20 09:12:57.970255 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:57.970338 | orchestrator | Saturday 20 September 2025 09:12:53 +0000 (0:00:02.095) 0:00:24.967 **** 2025-09-20 09:12:57.970354 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGnQRBR6PfI5bMu2k3NEmH098j0zoZo7byQzaxqO/D/tHkqrwJlQkFDcUbLADWSJcXMBkfaJdxR5UScPuTcgZ0E=) 2025-09-20 09:12:57.970371 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDXl5I1talujVtTxF2XpnaaIxA7fdHGoLCDcoaDlXWDqWWJXFln9qhf6w2EGl+zIel1f9v0ppnhR1YEQH/XWQG7+CK3HCH/dyFuDMzQccrE1Z3pbIYJCmlKqReohLxnHXiAP1MVaXvWu+93CG/D+WaqZQp7Tj+M/WEczXr63dm1hLveRjGqtmAH023lfw8yTVcTW20AABAKvYDYmQMZyJYfm4f2VC5YDfvXFNilXepLzRgJEuVgaBiI3BUtmNKUAQpIEOHfgWi0pETQTUWzF6+gbsrdfYnGKvzCXY61lv1C93BktXVu2jDRQPCS3L2Ck0ahNQ+GG1qyMRpclvBHW2y5QdtUQoLNIKjf9Yk3ei3nbMdiyLQmv352Kqq98/aPyY0kb49xZaKchuxCEk6+T7gqUF6VRNmkxhwiUh8hsnX24c/ovQVpIq9vJws43JCblCdSEUQoGOUM7kTxJrdTWaKVgyt+AWrYVb2RQR1pyGasocOi0fJmC36YfPKKRQteecM=) 2025-09-20 09:12:57.970385 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJzHQV8u5cC5hele9xVbZvBDraffpMqZoLPgAUi7W98+) 2025-09-20 09:12:57.970397 | orchestrator | 2025-09-20 09:12:57.970407 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:57.970417 | orchestrator | Saturday 20 September 2025 09:12:54 +0000 (0:00:01.106) 0:00:26.074 **** 2025-09-20 09:12:57.970428 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCG70qn0/+B+uSTfzWIG9zcV9IxQtVUjJMlpmcaJ5gBWqa+c7Up91ZNRQ/deVKHjZlLgGgy3ALWx1NLFXzlphIRlB0zktP/kSIAUIRQ4mkte1pHPc492wWYOwqp/K77W3MKM4xFctWmAJb2G2BWigh/vHGktqHz/xVvxN3FtvrVY0qMKcG3OkjAtpibOYlCzKOhSJDUlIbmW+JevV8W37yAdN9VpifZeLEaQJJq954MG5OuTTR8V5hRz7SiE6q/AflSg9NbEP33/BLKRxAzzytQ+sMFchWYAjCb/2KMmINxABZjaziDwZd5R5ciZopHOjlsZ5CQ9Nfl1x92iIwhzWL53ZDfcAtQ61eJEcfnMKnUXWH6K5lxTzumHCFOkOzEtneuGTK9gp2ZGiJQ/dYSVmpoKfdNdQTsnROj2kNE98XlKjdY8Ja57LX+clygSTxmHaE0LScIlwr5RzKUkdvNI9Tb4LbzHfGAqgcx7C7EkIIvMR1jZD4wq/ZoxmFncs+zU88=) 2025-09-20 09:12:57.970471 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNOgXdmCkTMV8LV6U2KazaWLs+Af25NI7DPxJuCu36GNgfQBb6PSd80FcaZxpnAvdHBT6u52XYatxQfI/ftap7E=) 2025-09-20 09:12:57.970481 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFmcRuPFSAML34HJx4vNXrSArScAITcJHsZzhC2IYn9p) 2025-09-20 09:12:57.970491 | orchestrator | 2025-09-20 09:12:57.970501 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-20 09:12:57.970510 | orchestrator | Saturday 20 September 2025 09:12:55 +0000 (0:00:01.112) 0:00:27.186 **** 2025-09-20 09:12:57.970520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+Jm10ma8umCHLJqk0y3WAsF86KxvuwN39vPnY68caGj43uGUPBswNpeucLuUn20sxEABECktCZg3evtrmuj7iwlPiXpN+BJ307s6aP+Nvo+0axCP8O3wssGQWuLLCIDICBRTSef1H6bYZ9gv6OgIyGmbF19mcDuu9xFTbmdsmoZjbP+4R13CWqpl7O7nh69oc/YJqHVQTkA+l65uOoMG1g86z58aTW1tE870EBIYj/yyuTzIn+GZzOkY+eZKn/d08y9Y4Xh4/qU5z0tAjsY6uDwyHvGB8huszwoi1/y4nEKI8maO3CgRS5tr8+gVklvB3Attd9K11TxlZjacbRLH6thKRGyRebLF14ym60OMmqhxR0sRYHwK4DQWKo6M/Sg2smE1f1SKNk6YJULBtt5ybnddISrFfo6J1yIvCqR2FFIMQ7Cd4aLT5wBoZfcJZDHji0kIq2uVRXSKagkq01MFWNQGLhcZe25c3FD/Jt4dGycULMCQWTW87qWzPQ5SbYqM=) 2025-09-20 09:12:57.970530 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBHe6yjmWhmlf1Zj1uzGvcSd54r1GuTq/uJNIwfKWXEDPDxRXyS80RjeLTsRgW55gChjgxjAkkGEEVZRo+WWQlc=) 2025-09-20 09:12:57.970540 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF4g6Mc6iM2Xv01GIwP6aDKYSWl1Hfvzs1kkp7a+cXfR) 2025-09-20 09:12:57.970549 | orchestrator | 2025-09-20 09:12:57.970559 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-20 09:12:57.970569 | orchestrator | Saturday 20 September 2025 09:12:56 +0000 (0:00:01.116) 0:00:28.303 **** 2025-09-20 09:12:57.970579 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-20 09:12:57.970588 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-20 09:12:57.970598 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-20 09:12:57.970607 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-20 09:12:57.970617 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-20 09:12:57.970626 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-20 09:12:57.970636 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-20 09:12:57.970646 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:12:57.970657 | orchestrator | 2025-09-20 09:12:57.970720 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-20 09:12:57.970733 | orchestrator | Saturday 20 September 2025 09:12:57 +0000 (0:00:00.167) 0:00:28.470 **** 2025-09-20 09:12:57.970743 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:12:57.970753 | orchestrator | 2025-09-20 09:12:57.970762 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-20 09:12:57.970772 | orchestrator | Saturday 20 September 2025 09:12:57 +0000 (0:00:00.080) 0:00:28.551 **** 2025-09-20 09:12:57.970782 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:12:57.970791 | orchestrator | 2025-09-20 09:12:57.970801 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-20 09:12:57.970810 | orchestrator | Saturday 20 September 2025 09:12:57 +0000 (0:00:00.060) 0:00:28.611 **** 2025-09-20 09:12:57.970828 | orchestrator | changed: [testbed-manager] 2025-09-20 09:12:57.970838 | orchestrator | 2025-09-20 09:12:57.970848 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:12:57.970858 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 09:12:57.970869 | orchestrator | 2025-09-20 09:12:57.970878 | orchestrator | 2025-09-20 
09:12:57.970888 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:12:57.970897 | orchestrator | Saturday 20 September 2025 09:12:57 +0000 (0:00:00.534) 0:00:29.146 **** 2025-09-20 09:12:57.970907 | orchestrator | =============================================================================== 2025-09-20 09:12:57.970916 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.01s 2025-09-20 09:12:57.970926 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.49s 2025-09-20 09:12:57.970937 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.10s 2025-09-20 09:12:57.970946 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2025-09-20 09:12:57.970971 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-09-20 09:12:57.970981 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-20 09:12:57.970991 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-09-20 09:12:57.971000 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-09-20 09:12:57.971010 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-20 09:12:57.971019 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-09-20 09:12:57.971029 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-20 09:12:57.971038 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-20 09:12:57.971048 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-09-20 
09:12:57.971058 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-20 09:12:57.971067 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-20 09:12:57.971076 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-09-20 09:12:57.971086 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.53s 2025-09-20 09:12:57.971095 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.20s 2025-09-20 09:12:57.971106 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2025-09-20 09:12:57.971120 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-09-20 09:12:58.274297 | orchestrator | + osism apply squid 2025-09-20 09:13:10.380913 | orchestrator | 2025-09-20 09:13:10 | INFO  | Task 08c3395a-c832-4a15-943c-b397f2f8e0d0 (squid) was prepared for execution. 2025-09-20 09:13:10.381810 | orchestrator | 2025-09-20 09:13:10 | INFO  | It takes a moment until task 08c3395a-c832-4a15-943c-b397f2f8e0d0 (squid) has been started and output is visible here. 
2025-09-20 09:15:03.856868 | orchestrator | 2025-09-20 09:15:03.856980 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-20 09:15:03.856997 | orchestrator | 2025-09-20 09:15:03.857008 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-20 09:15:03.857019 | orchestrator | Saturday 20 September 2025 09:13:13 +0000 (0:00:00.149) 0:00:00.149 **** 2025-09-20 09:15:03.857030 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 09:15:03.857041 | orchestrator | 2025-09-20 09:15:03.857051 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-20 09:15:03.857086 | orchestrator | Saturday 20 September 2025 09:13:14 +0000 (0:00:00.076) 0:00:00.226 **** 2025-09-20 09:15:03.857097 | orchestrator | ok: [testbed-manager] 2025-09-20 09:15:03.857108 | orchestrator | 2025-09-20 09:15:03.857118 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-20 09:15:03.857128 | orchestrator | Saturday 20 September 2025 09:13:15 +0000 (0:00:01.396) 0:00:01.622 **** 2025-09-20 09:15:03.857138 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-20 09:15:03.857148 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-20 09:15:03.857158 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-20 09:15:03.857168 | orchestrator | 2025-09-20 09:15:03.857178 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-20 09:15:03.857187 | orchestrator | Saturday 20 September 2025 09:13:16 +0000 (0:00:01.151) 0:00:02.774 **** 2025-09-20 09:15:03.857197 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-20 09:15:03.857207 | 
orchestrator | 2025-09-20 09:15:03.857217 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-20 09:15:03.857227 | orchestrator | Saturday 20 September 2025 09:13:17 +0000 (0:00:01.072) 0:00:03.846 **** 2025-09-20 09:15:03.857236 | orchestrator | ok: [testbed-manager] 2025-09-20 09:15:03.857246 | orchestrator | 2025-09-20 09:15:03.857256 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-20 09:15:03.857266 | orchestrator | Saturday 20 September 2025 09:13:18 +0000 (0:00:00.391) 0:00:04.238 **** 2025-09-20 09:15:03.857276 | orchestrator | changed: [testbed-manager] 2025-09-20 09:15:03.857286 | orchestrator | 2025-09-20 09:15:03.857295 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-20 09:15:03.857305 | orchestrator | Saturday 20 September 2025 09:13:18 +0000 (0:00:00.925) 0:00:05.163 **** 2025-09-20 09:15:03.857315 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-09-20 09:15:03.857325 | orchestrator | ok: [testbed-manager] 2025-09-20 09:15:03.857335 | orchestrator | 2025-09-20 09:15:03.857345 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-20 09:15:03.857354 | orchestrator | Saturday 20 September 2025 09:13:50 +0000 (0:00:31.594) 0:00:36.758 **** 2025-09-20 09:15:03.857366 | orchestrator | changed: [testbed-manager] 2025-09-20 09:15:03.857377 | orchestrator | 2025-09-20 09:15:03.857389 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-20 09:15:03.857400 | orchestrator | Saturday 20 September 2025 09:14:02 +0000 (0:00:12.180) 0:00:48.939 **** 2025-09-20 09:15:03.857412 | orchestrator | Pausing for 60 seconds 2025-09-20 09:15:03.857424 | orchestrator | changed: [testbed-manager] 2025-09-20 09:15:03.857435 | orchestrator | 2025-09-20 09:15:03.857447 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-20 09:15:03.857459 | orchestrator | Saturday 20 September 2025 09:15:02 +0000 (0:01:00.090) 0:01:49.030 **** 2025-09-20 09:15:03.857470 | orchestrator | ok: [testbed-manager] 2025-09-20 09:15:03.857482 | orchestrator | 2025-09-20 09:15:03.857493 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-20 09:15:03.857504 | orchestrator | Saturday 20 September 2025 09:15:02 +0000 (0:00:00.083) 0:01:49.114 **** 2025-09-20 09:15:03.857515 | orchestrator | changed: [testbed-manager] 2025-09-20 09:15:03.857526 | orchestrator | 2025-09-20 09:15:03.857538 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:15:03.857549 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:15:03.857560 | orchestrator | 2025-09-20 09:15:03.857571 | orchestrator | 2025-09-20 09:15:03.857583 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-20 09:15:03.857594 | orchestrator | Saturday 20 September 2025 09:15:03 +0000 (0:00:00.671) 0:01:49.785 **** 2025-09-20 09:15:03.857612 | orchestrator | =============================================================================== 2025-09-20 09:15:03.857624 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2025-09-20 09:15:03.857635 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.60s 2025-09-20 09:15:03.857646 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.18s 2025-09-20 09:15:03.857657 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.40s 2025-09-20 09:15:03.857668 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s 2025-09-20 09:15:03.857680 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2025-09-20 09:15:03.857692 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.93s 2025-09-20 09:15:03.857703 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.67s 2025-09-20 09:15:03.857738 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2025-09-20 09:15:03.857750 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-09-20 09:15:03.857760 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-09-20 09:15:04.169530 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 09:15:04.169607 | orchestrator | ++ semver latest 9.0.0 2025-09-20 09:15:04.222562 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-20 09:15:04.222600 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 09:15:04.223769 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-20 09:15:16.012660 | orchestrator | 2025-09-20 09:15:16 | INFO  | Task 33c09c55-7cbf-4d44-a1fa-2355f137e19a (operator) was prepared for execution. 2025-09-20 09:15:16.012783 | orchestrator | 2025-09-20 09:15:16 | INFO  | It takes a moment until task 33c09c55-7cbf-4d44-a1fa-2355f137e19a (operator) has been started and output is visible here. 2025-09-20 09:15:31.268507 | orchestrator | 2025-09-20 09:15:31.268614 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-20 09:15:31.268631 | orchestrator | 2025-09-20 09:15:31.268644 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 09:15:31.268655 | orchestrator | Saturday 20 September 2025 09:15:19 +0000 (0:00:00.148) 0:00:00.148 **** 2025-09-20 09:15:31.268667 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:15:31.268679 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:15:31.268691 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:15:31.268702 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:15:31.268713 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:15:31.268804 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:15:31.268817 | orchestrator | 2025-09-20 09:15:31.268829 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-20 09:15:31.268840 | orchestrator | Saturday 20 September 2025 09:15:22 +0000 (0:00:03.192) 0:00:03.340 **** 2025-09-20 09:15:31.268851 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:15:31.268862 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:15:31.268873 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:15:31.268884 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:15:31.268895 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:15:31.268906 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:15:31.268917 | orchestrator | 2025-09-20 
09:15:31.268928 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-20 09:15:31.268939 | orchestrator | 2025-09-20 09:15:31.268950 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-20 09:15:31.268961 | orchestrator | Saturday 20 September 2025 09:15:23 +0000 (0:00:00.744) 0:00:04.085 **** 2025-09-20 09:15:31.268973 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:15:31.268984 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:15:31.268994 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:15:31.269005 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:15:31.269016 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:15:31.269027 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:15:31.269062 | orchestrator | 2025-09-20 09:15:31.269073 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-20 09:15:31.269084 | orchestrator | Saturday 20 September 2025 09:15:23 +0000 (0:00:00.187) 0:00:04.272 **** 2025-09-20 09:15:31.269095 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:15:31.269106 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:15:31.269117 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:15:31.269127 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:15:31.269138 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:15:31.269149 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:15:31.269160 | orchestrator | 2025-09-20 09:15:31.269171 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-20 09:15:31.269182 | orchestrator | Saturday 20 September 2025 09:15:23 +0000 (0:00:00.150) 0:00:04.422 **** 2025-09-20 09:15:31.269193 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:15:31.269205 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:15:31.269216 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:15:31.269226 | 
orchestrator | changed: [testbed-node-5] 2025-09-20 09:15:31.269237 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:15:31.269249 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:15:31.269260 | orchestrator | 2025-09-20 09:15:31.269271 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-20 09:15:31.269282 | orchestrator | Saturday 20 September 2025 09:15:24 +0000 (0:00:00.573) 0:00:04.996 **** 2025-09-20 09:15:31.269293 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:15:31.269303 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:15:31.269314 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:15:31.269325 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:15:31.269336 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:15:31.269347 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:15:31.269358 | orchestrator | 2025-09-20 09:15:31.269368 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-20 09:15:31.269379 | orchestrator | Saturday 20 September 2025 09:15:25 +0000 (0:00:00.793) 0:00:05.790 **** 2025-09-20 09:15:31.269391 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-20 09:15:31.269402 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-20 09:15:31.269413 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-20 09:15:31.269424 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-20 09:15:31.269435 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-20 09:15:31.269445 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-20 09:15:31.269456 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-20 09:15:31.269467 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-20 09:15:31.269478 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-20 09:15:31.269488 | orchestrator | changed: 
[testbed-node-4] => (item=sudo) 2025-09-20 09:15:31.269499 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-20 09:15:31.269510 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-20 09:15:31.269521 | orchestrator | 2025-09-20 09:15:31.269531 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-20 09:15:31.269548 | orchestrator | Saturday 20 September 2025 09:15:26 +0000 (0:00:01.163) 0:00:06.953 **** 2025-09-20 09:15:31.269559 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:15:31.269570 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:15:31.269581 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:15:31.269592 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:15:31.269602 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:15:31.269613 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:15:31.269624 | orchestrator | 2025-09-20 09:15:31.269635 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-20 09:15:31.269646 | orchestrator | Saturday 20 September 2025 09:15:27 +0000 (0:00:01.268) 0:00:08.222 **** 2025-09-20 09:15:31.269657 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-20 09:15:31.269675 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
2025-09-20 09:15:31.269686 | orchestrator | To avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-20 09:15:31.269698 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-20 09:15:31.269765 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-20 09:15:31.269779 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-20 09:15:31.269790 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-20 09:15:31.269800 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-20 09:15:31.269811 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-20 09:15:31.269822 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-20 09:15:31.269833 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-20 09:15:31.269844 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-20 09:15:31.269854 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-20 09:15:31.269865 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-20 09:15:31.269876 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-20 09:15:31.269886 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-20 09:15:31.269897 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-20 09:15:31.269908 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-20 09:15:31.269919 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-20 09:15:31.269930 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-20 09:15:31.269940 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-20 09:15:31.269951 | orchestrator |
2025-09-20 09:15:31.269962 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-20 09:15:31.269973 | orchestrator | Saturday 20 September 2025 09:15:29 +0000 (0:00:01.264) 0:00:09.486 ****
2025-09-20 09:15:31.269984 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:15:31.269995 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:15:31.270006 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:15:31.270083 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:15:31.270098 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:15:31.270108 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:15:31.270119 | orchestrator |
2025-09-20 09:15:31.270130 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-20 09:15:31.270141 | orchestrator | Saturday 20 September 2025 09:15:29 +0000 (0:00:00.176) 0:00:09.663 ****
2025-09-20 09:15:31.270152 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:15:31.270163 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:15:31.270174 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:15:31.270185 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:15:31.270196 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:15:31.270206 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:15:31.270217 | orchestrator |
2025-09-20 09:15:31.270228 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-20 09:15:31.270239 | orchestrator | Saturday 20 September 2025 09:15:29 +0000 (0:00:00.686) 0:00:10.350 ****
2025-09-20 09:15:31.270250 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:15:31.270261 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:15:31.270272 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:15:31.270283 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:15:31.270294 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:15:31.270305 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:15:31.270316 | orchestrator |
2025-09-20 09:15:31.270335 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-20 09:15:31.270346 | orchestrator | Saturday 20 September 2025 09:15:30 +0000 (0:00:00.213) 0:00:10.563 ****
2025-09-20 09:15:31.270357 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-20 09:15:31.270373 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-20 09:15:31.270384 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-20 09:15:31.270395 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:15:31.270406 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:15:31.270417 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-20 09:15:31.270428 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-20 09:15:31.270438 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:15:31.270449 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:15:31.270460 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:15:31.270471 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-20 09:15:31.270482 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:15:31.270493 | orchestrator |
2025-09-20 09:15:31.270504 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-20 09:15:31.270515 | orchestrator | Saturday 20 September 2025 09:15:30 +0000 (0:00:00.686) 0:00:11.249 ****
2025-09-20 09:15:31.270526 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:15:31.270537 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:15:31.270547 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:15:31.270558 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:15:31.270569 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:15:31.270580 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:15:31.270591 | orchestrator |
2025-09-20 09:15:31.270602 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-20 09:15:31.270613 | orchestrator | Saturday 20 September 2025 09:15:30 +0000 (0:00:00.156) 0:00:11.406 ****
2025-09-20 09:15:31.270624 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:15:31.270635 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:15:31.270645 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:15:31.270656 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:15:31.270673 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:15:31.270685 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:15:31.270696 | orchestrator |
2025-09-20 09:15:31.270707 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-20 09:15:31.270738 | orchestrator | Saturday 20 September 2025 09:15:31 +0000 (0:00:00.153) 0:00:11.559 ****
2025-09-20 09:15:31.270752 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:15:31.270763 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:15:31.270774 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:15:31.270785 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:15:31.270805 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:15:32.393260 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:15:32.393363 | orchestrator |
2025-09-20 09:15:32.393379 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-20 09:15:32.393392 | orchestrator | Saturday 20 September 2025 09:15:31 +0000 (0:00:00.157) 0:00:11.717 ****
2025-09-20 09:15:32.393403 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:15:32.393414 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:15:32.393425 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:15:32.393436 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:15:32.393448 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:15:32.393459 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:15:32.393489 | orchestrator |
2025-09-20 09:15:32.393501 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-20 09:15:32.393513 | orchestrator | Saturday 20 September 2025 09:15:31 +0000 (0:00:00.623) 0:00:12.340 ****
2025-09-20 09:15:32.393534 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:15:32.393546 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:15:32.393557 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:15:32.393593 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:15:32.393605 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:15:32.393616 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:15:32.393626 | orchestrator |
2025-09-20 09:15:32.393637 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:15:32.393649 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 09:15:32.393662 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 09:15:32.393672 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 09:15:32.393683 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 09:15:32.393694 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 09:15:32.393705 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 09:15:32.393715 | orchestrator |
2025-09-20 09:15:32.393767 | orchestrator |
2025-09-20 09:15:32.393779 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:15:32.393790 | orchestrator | Saturday 20 September 2025 09:15:32 +0000 (0:00:00.224) 0:00:12.565 ****
2025-09-20 09:15:32.393801 | orchestrator | ===============================================================================
2025-09-20 09:15:32.393814 | orchestrator | Gathering Facts --------------------------------------------------------- 3.19s
2025-09-20 09:15:32.393826 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s
2025-09-20 09:15:32.393838 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2025-09-20 09:15:32.393852 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s
2025-09-20 09:15:32.393864 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2025-09-20 09:15:32.393877 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s
2025-09-20 09:15:32.393889 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.69s
2025-09-20 09:15:32.393901 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s
2025-09-20 09:15:32.393913 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s
2025-09-20 09:15:32.393925 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.57s
2025-09-20 09:15:32.393938 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-09-20 09:15:32.393950 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.21s
2025-09-20 09:15:32.393962 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s
2025-09-20 09:15:32.393974 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2025-09-20 09:15:32.394001 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2025-09-20 09:15:32.394075 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2025-09-20 09:15:32.394091 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-09-20 09:15:32.394103 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2025-09-20 09:15:32.702226 | orchestrator | + osism apply --environment custom facts
2025-09-20 09:15:34.551264 | orchestrator | 2025-09-20 09:15:34 | INFO  | Trying to run play facts in environment custom
2025-09-20 09:15:44.718154 | orchestrator | 2025-09-20 09:15:44 | INFO  | Task 4ff568cd-42f5-479f-8419-6c84a745dfdd (facts) was prepared for execution.
2025-09-20 09:15:44.718265 | orchestrator | 2025-09-20 09:15:44 | INFO  | It takes a moment until task 4ff568cd-42f5-479f-8419-6c84a745dfdd (facts) has been started and output is visible here.
2025-09-20 09:16:26.807317 | orchestrator |
2025-09-20 09:16:26.807428 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-20 09:16:26.807444 | orchestrator |
2025-09-20 09:16:26.807455 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-20 09:16:26.807466 | orchestrator | Saturday 20 September 2025 09:15:48 +0000 (0:00:00.079) 0:00:00.079 ****
2025-09-20 09:16:26.807476 | orchestrator | ok: [testbed-manager]
2025-09-20 09:16:26.807487 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:16:26.807498 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:16:26.807507 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:16:26.807517 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:16:26.807527 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:16:26.807536 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:16:26.807546 | orchestrator |
2025-09-20 09:16:26.807556 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-20 09:16:26.807566 | orchestrator | Saturday 20 September 2025 09:15:49 +0000 (0:00:01.308) 0:00:01.388 ****
2025-09-20 09:16:26.807575 | orchestrator | ok: [testbed-manager]
2025-09-20 09:16:26.807585 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:16:26.807595 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:16:26.807605 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:16:26.807614 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:16:26.807624 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:16:26.807634 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:16:26.807643 | orchestrator |
2025-09-20 09:16:26.807653 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-20 09:16:26.807663 | orchestrator |
2025-09-20 09:16:26.807672 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-20 09:16:26.807682 | orchestrator | Saturday 20 September 2025 09:15:50 +0000 (0:00:01.127) 0:00:02.516 ****
2025-09-20 09:16:26.807692 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:26.807702 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:26.807712 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:26.807721 | orchestrator |
2025-09-20 09:16:26.807731 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-20 09:16:26.807800 | orchestrator | Saturday 20 September 2025 09:15:50 +0000 (0:00:00.103) 0:00:02.620 ****
2025-09-20 09:16:26.807811 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:26.807821 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:26.807830 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:26.807840 | orchestrator |
2025-09-20 09:16:26.807850 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-20 09:16:26.807862 | orchestrator | Saturday 20 September 2025 09:15:51 +0000 (0:00:00.193) 0:00:02.813 ****
2025-09-20 09:16:26.807873 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:26.807884 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:26.807895 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:26.807906 | orchestrator |
2025-09-20 09:16:26.807917 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-20 09:16:26.807928 | orchestrator | Saturday 20 September 2025 09:15:51 +0000 (0:00:00.131) 0:00:02.986 ****
2025-09-20 09:16:26.807941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:16:26.807952 | orchestrator |
2025-09-20 09:16:26.807963 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-20 09:16:26.807975 | orchestrator | Saturday 20 September 2025 09:15:51 +0000 (0:00:00.131) 0:00:03.118 ****
2025-09-20 09:16:26.808010 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:26.808021 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:26.808032 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:26.808043 | orchestrator |
2025-09-20 09:16:26.808054 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-20 09:16:26.808065 | orchestrator | Saturday 20 September 2025 09:15:51 +0000 (0:00:00.411) 0:00:03.529 ****
2025-09-20 09:16:26.808076 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:16:26.808087 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:16:26.808098 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:16:26.808109 | orchestrator |
2025-09-20 09:16:26.808121 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-20 09:16:26.808132 | orchestrator | Saturday 20 September 2025 09:15:51 +0000 (0:00:00.090) 0:00:03.620 ****
2025-09-20 09:16:26.808143 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:16:26.808154 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:16:26.808164 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:16:26.808176 | orchestrator |
2025-09-20 09:16:26.808187 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-20 09:16:26.808198 | orchestrator | Saturday 20 September 2025 09:15:52 +0000 (0:00:01.027) 0:00:04.647 ****
2025-09-20 09:16:26.808210 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:26.808220 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:26.808229 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:26.808239 | orchestrator |
2025-09-20 09:16:26.808249 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-20 09:16:26.808259 | orchestrator | Saturday 20 September 2025 09:15:53 +0000 (0:00:00.479) 0:00:05.127 ****
2025-09-20 09:16:26.808269 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:16:26.808279 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:16:26.808289 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:16:26.808299 | orchestrator |
2025-09-20 09:16:26.808309 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-20 09:16:26.808319 | orchestrator | Saturday 20 September 2025 09:15:54 +0000 (0:00:01.045) 0:00:06.173 ****
2025-09-20 09:16:26.808328 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:16:26.808338 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:16:26.808348 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:16:26.808357 | orchestrator |
2025-09-20 09:16:26.808367 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-20 09:16:26.808394 | orchestrator | Saturday 20 September 2025 09:16:11 +0000 (0:00:16.580) 0:00:22.753 ****
2025-09-20 09:16:26.808404 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:16:26.808414 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:16:26.808424 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:16:26.808433 | orchestrator |
2025-09-20 09:16:26.808443 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-20 09:16:26.808469 | orchestrator | Saturday 20 September 2025 09:16:11 +0000 (0:00:00.125) 0:00:22.879 ****
2025-09-20 09:16:26.808480 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:16:26.808489 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:16:26.808499 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:16:26.808509 | orchestrator |
2025-09-20 09:16:26.808519 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-20 09:16:26.808528 | orchestrator | Saturday 20 September 2025 09:16:17 +0000 (0:00:06.787) 0:00:29.666 ****
2025-09-20 09:16:26.808538 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:26.808548 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:26.808558 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:26.808567 | orchestrator |
2025-09-20 09:16:26.808577 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-20 09:16:26.808587 | orchestrator | Saturday 20 September 2025 09:16:18 +0000 (0:00:00.422) 0:00:30.089 ****
2025-09-20 09:16:26.808597 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-20 09:16:26.808614 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-20 09:16:26.808624 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-20 09:16:26.808634 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-20 09:16:26.808643 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-20 09:16:26.808653 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-20 09:16:26.808663 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-20 09:16:26.808672 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-20 09:16:26.808682 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-20 09:16:26.808692 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-20 09:16:26.808701 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-20 09:16:26.808711 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-20 09:16:26.808721 | orchestrator |
2025-09-20 09:16:26.808731 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-20 09:16:26.808761 | orchestrator | Saturday 20 September 2025 09:16:21 +0000 (0:00:03.475) 0:00:33.565 ****
2025-09-20 09:16:26.808771 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:26.808781 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:26.808790 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:26.808800 | orchestrator |
2025-09-20 09:16:26.808810 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-20 09:16:26.808819 | orchestrator |
2025-09-20 09:16:26.808829 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-20 09:16:26.808839 | orchestrator | Saturday 20 September 2025 09:16:23 +0000 (0:00:01.201) 0:00:34.767 ****
2025-09-20 09:16:26.808849 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:16:26.808858 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:16:26.808868 | orchestrator | ok: [testbed-manager]
2025-09-20 09:16:26.808878 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:16:26.808887 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:26.808897 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:26.808906 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:26.808916 | orchestrator |
2025-09-20 09:16:26.808925 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:16:26.808936 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:16:26.808946 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:16:26.808957 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:16:26.808967 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:16:26.808976 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-20 09:16:26.808987 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-20 09:16:26.809002 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-20 09:16:26.809012 | orchestrator |
2025-09-20 09:16:26.809022 | orchestrator |
2025-09-20 09:16:26.809032 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:16:26.809041 | orchestrator | Saturday 20 September 2025 09:16:26 +0000 (0:00:03.714) 0:00:38.481 ****
2025-09-20 09:16:26.809051 | orchestrator | ===============================================================================
2025-09-20 09:16:26.809067 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.58s
2025-09-20 09:16:26.809077 | orchestrator | Install required packages (Debian) -------------------------------------- 6.79s
2025-09-20 09:16:26.809086 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.71s
2025-09-20 09:16:26.809096 | orchestrator | Copy fact files --------------------------------------------------------- 3.48s
2025-09-20 09:16:26.809106 | orchestrator | Create custom facts directory ------------------------------------------- 1.31s
2025-09-20 09:16:26.809115 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.20s
2025-09-20 09:16:26.809131 | orchestrator | Copy fact file ---------------------------------------------------------- 1.13s
2025-09-20 09:16:26.951602 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2025-09-20 09:16:26.951687 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2025-09-20 09:16:26.951700 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-09-20 09:16:26.951710 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-09-20 09:16:26.951720 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2025-09-20 09:16:26.951730 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2025-09-20 09:16:26.951789 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s
2025-09-20 09:16:26.951799 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-09-20 09:16:26.951810 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s
2025-09-20 09:16:26.951820 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2025-09-20 09:16:26.951830 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2025-09-20 09:16:27.160391 | orchestrator | + osism apply bootstrap
2025-09-20 09:16:38.898701 | orchestrator | 2025-09-20 09:16:38 | INFO  | Task cb5e5178-70f6-4a1f-8ec0-6c2b75b99afc (bootstrap) was prepared for execution.
2025-09-20 09:16:38.898844 | orchestrator | 2025-09-20 09:16:38 | INFO  | It takes a moment until task cb5e5178-70f6-4a1f-8ec0-6c2b75b99afc (bootstrap) has been started and output is visible here.
2025-09-20 09:16:54.402808 | orchestrator |
2025-09-20 09:16:54.402917 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-20 09:16:54.402933 | orchestrator |
2025-09-20 09:16:54.402946 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-20 09:16:54.402957 | orchestrator | Saturday 20 September 2025 09:16:43 +0000 (0:00:00.167) 0:00:00.167 ****
2025-09-20 09:16:54.402968 | orchestrator | ok: [testbed-manager]
2025-09-20 09:16:54.402981 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:16:54.402992 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:16:54.403003 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:16:54.403014 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:54.403024 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:54.403035 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:54.403046 | orchestrator |
2025-09-20 09:16:54.403057 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-20 09:16:54.403068 | orchestrator |
2025-09-20 09:16:54.403079 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-20 09:16:54.403090 | orchestrator | Saturday 20 September 2025 09:16:43 +0000 (0:00:00.258) 0:00:00.425 ****
2025-09-20 09:16:54.403100 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:16:54.403111 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:16:54.403122 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:16:54.403133 | orchestrator | ok: [testbed-manager]
2025-09-20 09:16:54.403143 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:54.403154 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:54.403165 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:54.403203 | orchestrator |
2025-09-20 09:16:54.403215 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-20 09:16:54.403226 | orchestrator |
2025-09-20 09:16:54.403237 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-20 09:16:54.403248 | orchestrator | Saturday 20 September 2025 09:16:46 +0000 (0:00:03.541) 0:00:03.967 ****
2025-09-20 09:16:54.403259 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-20 09:16:54.403271 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-20 09:16:54.403281 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-20 09:16:54.403292 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-20 09:16:54.403305 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-20 09:16:54.403318 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-20 09:16:54.403331 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-20 09:16:54.403343 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-20 09:16:54.403356 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-20 09:16:54.403369 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-20 09:16:54.403381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-20 09:16:54.403394 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-20 09:16:54.403406 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-20 09:16:54.403419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-20 09:16:54.403432 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-20 09:16:54.403444 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-20 09:16:54.403457 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-20 09:16:54.403469 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-20 09:16:54.403481 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-20 09:16:54.403494 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:16:54.403507 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-20 09:16:54.403519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-20 09:16:54.403531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-20 09:16:54.403544 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-20 09:16:54.403557 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-20 09:16:54.403570 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-20 09:16:54.403582 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-20 09:16:54.403594 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-20 09:16:54.403606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-20 09:16:54.403619 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-20 09:16:54.403631 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-20 09:16:54.403643 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-20 09:16:54.403654 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-20 09:16:54.403665 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:16:54.403675 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-20 09:16:54.403686 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-20 09:16:54.403697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-20 09:16:54.403708 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-20 09:16:54.403718 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:16:54.403729 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-20 09:16:54.403775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:16:54.403796 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-20 09:16:54.403807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-20 09:16:54.403818 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:16:54.403829 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-20 09:16:54.403840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-20 09:16:54.403867 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-20 09:16:54.403879 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-20 09:16:54.403890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-20 09:16:54.403901 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:16:54.403912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-20 09:16:54.403923 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:16:54.403934 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-20 09:16:54.403944 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-20 09:16:54.403955 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-20 09:16:54.403966 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:16:54.403977 | orchestrator |
2025-09-20 09:16:54.403988 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-20 09:16:54.403999 | orchestrator |
2025-09-20 09:16:54.404010 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-20 09:16:54.404020 | orchestrator | Saturday 20 September 2025 09:16:47 +0000 (0:00:00.463) 0:00:04.431 ****
2025-09-20 09:16:54.404031 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:16:54.404042 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:54.404053 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:16:54.404064 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:16:54.404074 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:54.404085 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:54.404096 | orchestrator | ok: [testbed-manager]
2025-09-20 09:16:54.404107 | orchestrator |
2025-09-20 09:16:54.404118 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-20 09:16:54.404129 | orchestrator | Saturday 20 September 2025 09:16:48 +0000 (0:00:01.188) 0:00:05.620 ****
2025-09-20 09:16:54.404140 | orchestrator | ok: [testbed-manager]
2025-09-20 09:16:54.404151 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:16:54.404161 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:16:54.404172 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:16:54.404183 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:16:54.404193 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:16:54.404204 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:16:54.404215 | orchestrator |
2025-09-20 09:16:54.404226 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-20 09:16:54.404237 | orchestrator | Saturday 20 September 2025 09:16:49 +0000 (0:00:00.255) 0:00:06.855 ****
2025-09-20 09:16:54.404249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:16:54.404262 | orchestrator |
2025-09-20 09:16:54.404273 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-20 09:16:54.404284 | orchestrator | Saturday 20 September 2025 09:16:49 +0000 (0:00:00.255) 0:00:07.110 ****
2025-09-20 09:16:54.404295 | orchestrator | changed: [testbed-manager]
2025-09-20 09:16:54.404305 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:16:54.404321 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:16:54.404332 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:16:54.404343 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:16:54.404354 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:16:54.404364 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:16:54.404375 | orchestrator |
2025-09-20 09:16:54.404393 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-09-20 09:16:54.404404 | orchestrator | Saturday 20 September 2025 09:16:51 +0000 (0:00:02.001) 0:00:09.111 ****
2025-09-20 09:16:54.404415 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:16:54.404427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:16:54.404440 | orchestrator |
2025-09-20 09:16:54.404451 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-09-20 09:16:54.404462 | orchestrator | Saturday 20 September 2025 09:16:52 +0000 (0:00:00.272) 0:00:09.384 ****
2025-09-20 09:16:54.404473 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:16:54.404484 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:16:54.404494 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:16:54.404505 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:16:54.404516 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:16:54.404526 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:16:54.404537 | orchestrator |
2025-09-20 09:16:54.404548 | orchestrator | TASK [osism.commons.proxy : Set system
wide settings in environment file] ****** 2025-09-20 09:16:54.404559 | orchestrator | Saturday 20 September 2025 09:16:53 +0000 (0:00:01.026) 0:00:10.411 **** 2025-09-20 09:16:54.404570 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:16:54.404580 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:16:54.404591 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:16:54.404602 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:16:54.404613 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:16:54.404623 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:16:54.404634 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:16:54.404645 | orchestrator | 2025-09-20 09:16:54.404656 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-20 09:16:54.404666 | orchestrator | Saturday 20 September 2025 09:16:53 +0000 (0:00:00.589) 0:00:11.001 **** 2025-09-20 09:16:54.404677 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:16:54.404688 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:16:54.404699 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:16:54.404709 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:16:54.404720 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:16:54.404731 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:16:54.404741 | orchestrator | ok: [testbed-manager] 2025-09-20 09:16:54.404767 | orchestrator | 2025-09-20 09:16:54.404779 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-20 09:16:54.404791 | orchestrator | Saturday 20 September 2025 09:16:54 +0000 (0:00:00.422) 0:00:11.424 **** 2025-09-20 09:16:54.404802 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:16:54.404812 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:16:54.404830 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:17:06.184997 | orchestrator | skipping: 
[testbed-node-2] 2025-09-20 09:17:06.185107 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:17:06.185120 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:17:06.185131 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:17:06.185141 | orchestrator | 2025-09-20 09:17:06.185153 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-20 09:17:06.185164 | orchestrator | Saturday 20 September 2025 09:16:54 +0000 (0:00:00.213) 0:00:11.637 **** 2025-09-20 09:17:06.185177 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:17:06.185204 | orchestrator | 2025-09-20 09:17:06.185214 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-20 09:17:06.185225 | orchestrator | Saturday 20 September 2025 09:16:54 +0000 (0:00:00.298) 0:00:11.936 **** 2025-09-20 09:17:06.185258 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:17:06.185268 | orchestrator | 2025-09-20 09:17:06.185278 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-20 09:17:06.185288 | orchestrator | Saturday 20 September 2025 09:16:55 +0000 (0:00:00.320) 0:00:12.256 **** 2025-09-20 09:17:06.185297 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.185308 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:06.185318 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:06.185328 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.185337 | orchestrator | ok: [testbed-node-3] 2025-09-20 
09:17:06.185347 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:06.185356 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:06.185366 | orchestrator | 2025-09-20 09:17:06.185376 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-20 09:17:06.185385 | orchestrator | Saturday 20 September 2025 09:16:56 +0000 (0:00:01.373) 0:00:13.630 **** 2025-09-20 09:17:06.185395 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:17:06.185405 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:17:06.185414 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:17:06.185424 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:17:06.185433 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:17:06.185443 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:17:06.185453 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:17:06.185462 | orchestrator | 2025-09-20 09:17:06.185472 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-20 09:17:06.185482 | orchestrator | Saturday 20 September 2025 09:16:56 +0000 (0:00:00.228) 0:00:13.858 **** 2025-09-20 09:17:06.185492 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.185501 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:06.185511 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:06.185521 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:06.185530 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.185540 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:06.185551 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:06.185562 | orchestrator | 2025-09-20 09:17:06.185574 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-20 09:17:06.185585 | orchestrator | Saturday 20 September 2025 09:16:57 +0000 (0:00:00.595) 0:00:14.454 **** 2025-09-20 09:17:06.185596 | orchestrator | skipping: 
[testbed-manager] 2025-09-20 09:17:06.185607 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:17:06.185619 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:17:06.185630 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:17:06.185641 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:17:06.185652 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:17:06.185663 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:17:06.185674 | orchestrator | 2025-09-20 09:17:06.185685 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-20 09:17:06.185698 | orchestrator | Saturday 20 September 2025 09:16:57 +0000 (0:00:00.261) 0:00:14.716 **** 2025-09-20 09:17:06.185708 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.185720 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:17:06.185731 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:17:06.185742 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:17:06.185775 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:17:06.185786 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:17:06.185798 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:17:06.185809 | orchestrator | 2025-09-20 09:17:06.185821 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-20 09:17:06.185833 | orchestrator | Saturday 20 September 2025 09:16:58 +0000 (0:00:00.548) 0:00:15.264 **** 2025-09-20 09:17:06.185851 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.185862 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:17:06.185874 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:17:06.185885 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:17:06.185897 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:17:06.185908 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:17:06.185919 | orchestrator | changed: 
[testbed-node-5] 2025-09-20 09:17:06.185929 | orchestrator | 2025-09-20 09:17:06.185938 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-20 09:17:06.185948 | orchestrator | Saturday 20 September 2025 09:16:59 +0000 (0:00:01.124) 0:00:16.389 **** 2025-09-20 09:17:06.185957 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.185967 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.185976 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:06.185986 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:06.185996 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:06.186005 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:06.186072 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:06.186084 | orchestrator | 2025-09-20 09:17:06.186094 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-20 09:17:06.186104 | orchestrator | Saturday 20 September 2025 09:17:00 +0000 (0:00:01.141) 0:00:17.531 **** 2025-09-20 09:17:06.186130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:17:06.186140 | orchestrator | 2025-09-20 09:17:06.186151 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-20 09:17:06.186161 | orchestrator | Saturday 20 September 2025 09:17:00 +0000 (0:00:00.406) 0:00:17.938 **** 2025-09-20 09:17:06.186170 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:17:06.186180 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:17:06.186189 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:17:06.186199 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:17:06.186209 | orchestrator | changed: [testbed-node-2] 2025-09-20 
09:17:06.186218 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:17:06.186227 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:17:06.186237 | orchestrator | 2025-09-20 09:17:06.186247 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-20 09:17:06.186256 | orchestrator | Saturday 20 September 2025 09:17:01 +0000 (0:00:01.180) 0:00:19.118 **** 2025-09-20 09:17:06.186266 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.186275 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:06.186285 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:06.186295 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:06.186304 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:06.186314 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.186323 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:06.186333 | orchestrator | 2025-09-20 09:17:06.186342 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-20 09:17:06.186352 | orchestrator | Saturday 20 September 2025 09:17:02 +0000 (0:00:00.238) 0:00:19.356 **** 2025-09-20 09:17:06.186362 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.186371 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:06.186380 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:06.186390 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:06.186399 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:06.186409 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.186419 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:06.186428 | orchestrator | 2025-09-20 09:17:06.186438 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-20 09:17:06.186447 | orchestrator | Saturday 20 September 2025 09:17:02 +0000 (0:00:00.260) 0:00:19.617 **** 2025-09-20 09:17:06.186457 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.186467 | 
orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:06.186483 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:06.186493 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:06.186502 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:06.186512 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.186521 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:06.186531 | orchestrator | 2025-09-20 09:17:06.186540 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-20 09:17:06.186590 | orchestrator | Saturday 20 September 2025 09:17:02 +0000 (0:00:00.217) 0:00:19.834 **** 2025-09-20 09:17:06.186606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:17:06.186618 | orchestrator | 2025-09-20 09:17:06.186628 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-20 09:17:06.186638 | orchestrator | Saturday 20 September 2025 09:17:02 +0000 (0:00:00.321) 0:00:20.156 **** 2025-09-20 09:17:06.186647 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.186657 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:06.186667 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:06.186676 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:06.186686 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:06.186695 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.186705 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:06.186715 | orchestrator | 2025-09-20 09:17:06.186724 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-20 09:17:06.186734 | orchestrator | Saturday 20 September 2025 09:17:03 +0000 (0:00:00.513) 0:00:20.670 **** 2025-09-20 09:17:06.186744 | orchestrator | 
skipping: [testbed-manager] 2025-09-20 09:17:06.186771 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:17:06.186781 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:17:06.186791 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:17:06.186801 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:17:06.186810 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:17:06.186820 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:17:06.186830 | orchestrator | 2025-09-20 09:17:06.186839 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-20 09:17:06.186849 | orchestrator | Saturday 20 September 2025 09:17:03 +0000 (0:00:00.222) 0:00:20.892 **** 2025-09-20 09:17:06.186859 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.186869 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:17:06.186878 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:17:06.186888 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:06.186898 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:17:06.186907 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.186917 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:06.186927 | orchestrator | 2025-09-20 09:17:06.186936 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-20 09:17:06.186946 | orchestrator | Saturday 20 September 2025 09:17:04 +0000 (0:00:00.947) 0:00:21.839 **** 2025-09-20 09:17:06.186956 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.186966 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:06.186975 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:06.186985 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:06.186995 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:06.187004 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.187014 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:06.187024 | orchestrator | 
2025-09-20 09:17:06.187033 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-20 09:17:06.187043 | orchestrator | Saturday 20 September 2025 09:17:05 +0000 (0:00:00.529) 0:00:22.368 **** 2025-09-20 09:17:06.187053 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:06.187063 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:06.187073 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:06.187082 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:17:06.187105 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:17:46.723815 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:17:46.723924 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:46.723941 | orchestrator | 2025-09-20 09:17:46.723954 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-20 09:17:46.723968 | orchestrator | Saturday 20 September 2025 09:17:06 +0000 (0:00:00.968) 0:00:23.337 **** 2025-09-20 09:17:46.723979 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:46.723990 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:46.724001 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:46.724012 | orchestrator | changed: [testbed-manager] 2025-09-20 09:17:46.724023 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:17:46.724034 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:17:46.724045 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:17:46.724055 | orchestrator | 2025-09-20 09:17:46.724067 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-20 09:17:46.724078 | orchestrator | Saturday 20 September 2025 09:17:23 +0000 (0:00:16.970) 0:00:40.308 **** 2025-09-20 09:17:46.724089 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:46.724100 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:46.724111 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:46.724121 | orchestrator 
| ok: [testbed-node-2] 2025-09-20 09:17:46.724132 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:46.724143 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:46.724154 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:46.724165 | orchestrator | 2025-09-20 09:17:46.724176 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-20 09:17:46.724187 | orchestrator | Saturday 20 September 2025 09:17:23 +0000 (0:00:00.205) 0:00:40.513 **** 2025-09-20 09:17:46.724198 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:46.724209 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:46.724220 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:46.724231 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:46.724242 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:46.724252 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:46.724263 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:46.724274 | orchestrator | 2025-09-20 09:17:46.724285 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-20 09:17:46.724299 | orchestrator | Saturday 20 September 2025 09:17:23 +0000 (0:00:00.240) 0:00:40.753 **** 2025-09-20 09:17:46.724311 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:46.724324 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:46.724337 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:46.724349 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:46.724362 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:46.724374 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:46.724387 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:46.724399 | orchestrator | 2025-09-20 09:17:46.724412 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-20 09:17:46.724425 | orchestrator | Saturday 20 September 2025 09:17:23 +0000 (0:00:00.208) 0:00:40.962 **** 2025-09-20 
09:17:46.724459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:17:46.724475 | orchestrator | 2025-09-20 09:17:46.724488 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-20 09:17:46.724501 | orchestrator | Saturday 20 September 2025 09:17:24 +0000 (0:00:00.254) 0:00:41.216 **** 2025-09-20 09:17:46.724514 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:46.724527 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:46.724539 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:46.724552 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:46.724564 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:46.724577 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:46.724610 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:46.724623 | orchestrator | 2025-09-20 09:17:46.724636 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-20 09:17:46.724648 | orchestrator | Saturday 20 September 2025 09:17:25 +0000 (0:00:01.399) 0:00:42.615 **** 2025-09-20 09:17:46.724658 | orchestrator | changed: [testbed-manager] 2025-09-20 09:17:46.724669 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:17:46.724680 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:17:46.724690 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:17:46.724701 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:17:46.724711 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:17:46.724722 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:17:46.724733 | orchestrator | 2025-09-20 09:17:46.724744 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-20 09:17:46.724754 | 
orchestrator | Saturday 20 September 2025 09:17:26 +0000 (0:00:01.049) 0:00:43.665 **** 2025-09-20 09:17:46.724766 | orchestrator | ok: [testbed-manager] 2025-09-20 09:17:46.724798 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:46.724809 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:17:46.724819 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:46.724830 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:17:46.724841 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:17:46.724851 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:46.724862 | orchestrator | 2025-09-20 09:17:46.724873 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-20 09:17:46.724884 | orchestrator | Saturday 20 September 2025 09:17:27 +0000 (0:00:00.821) 0:00:44.487 **** 2025-09-20 09:17:46.724896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:17:46.724909 | orchestrator | 2025-09-20 09:17:46.724920 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-20 09:17:46.724932 | orchestrator | Saturday 20 September 2025 09:17:27 +0000 (0:00:00.309) 0:00:44.796 **** 2025-09-20 09:17:46.724942 | orchestrator | changed: [testbed-manager] 2025-09-20 09:17:46.724953 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:17:46.724964 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:17:46.724974 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:17:46.724985 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:17:46.724996 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:17:46.725007 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:17:46.725017 | orchestrator | 2025-09-20 09:17:46.725045 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-20 09:17:46.725056 | orchestrator | Saturday 20 September 2025 09:17:28 +0000 (0:00:01.040) 0:00:45.837 **** 2025-09-20 09:17:46.725067 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:17:46.725078 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:17:46.725089 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:17:46.725100 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:17:46.725110 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:17:46.725121 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:17:46.725132 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:17:46.725143 | orchestrator | 2025-09-20 09:17:46.725153 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-20 09:17:46.725164 | orchestrator | Saturday 20 September 2025 09:17:28 +0000 (0:00:00.312) 0:00:46.149 **** 2025-09-20 09:17:46.725175 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:17:46.725186 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:17:46.725196 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:17:46.725207 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:17:46.725218 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:17:46.725229 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:17:46.725240 | orchestrator | changed: [testbed-manager] 2025-09-20 09:17:46.725260 | orchestrator | 2025-09-20 09:17:46.725271 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-20 09:17:46.725282 | orchestrator | Saturday 20 September 2025 09:17:41 +0000 (0:00:12.757) 0:00:58.907 **** 2025-09-20 09:17:46.725293 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:17:46.725304 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:17:46.725315 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:17:46.725326 | orchestrator | ok: [testbed-node-4] 2025-09-20 
09:17:46.725336 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:17:46.725347 | orchestrator | ok: [testbed-manager]
2025-09-20 09:17:46.725358 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:17:46.725369 | orchestrator |
2025-09-20 09:17:46.725380 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-20 09:17:46.725391 | orchestrator | Saturday 20 September 2025 09:17:42 +0000 (0:00:00.915) 0:00:59.823 ****
2025-09-20 09:17:46.725402 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:17:46.725412 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:17:46.725423 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:17:46.725434 | orchestrator | ok: [testbed-manager]
2025-09-20 09:17:46.725445 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:17:46.725456 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:17:46.725466 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:17:46.725477 | orchestrator |
2025-09-20 09:17:46.725488 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-20 09:17:46.725499 | orchestrator | Saturday 20 September 2025 09:17:43 +0000 (0:00:00.241) 0:01:00.699 ****
2025-09-20 09:17:46.725510 | orchestrator | ok: [testbed-manager]
2025-09-20 09:17:46.725521 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:17:46.725531 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:17:46.725542 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:17:46.725553 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:17:46.725564 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:17:46.725575 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:17:46.725586 | orchestrator |
2025-09-20 09:17:46.725597 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-20 09:17:46.725608 | orchestrator | Saturday 20 September 2025 09:17:43 +0000 (0:00:00.241) 0:01:00.940 ****
2025-09-20 09:17:46.725619 | orchestrator | ok: [testbed-manager]
2025-09-20 09:17:46.725630 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:17:46.725640 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:17:46.725651 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:17:46.725662 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:17:46.725673 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:17:46.725683 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:17:46.725694 | orchestrator |
2025-09-20 09:17:46.725705 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-20 09:17:46.725716 | orchestrator | Saturday 20 September 2025 09:17:44 +0000 (0:00:00.283) 0:01:01.223 ****
2025-09-20 09:17:46.725727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:17:46.725739 | orchestrator |
2025-09-20 09:17:46.725750 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-20 09:17:46.725760 | orchestrator | Saturday 20 September 2025 09:17:44 +0000 (0:00:00.334) 0:01:01.558 ****
2025-09-20 09:17:46.725787 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:17:46.725799 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:17:46.725810 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:17:46.725820 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:17:46.725831 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:17:46.725842 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:17:46.725853 | orchestrator | ok: [testbed-manager]
2025-09-20 09:17:46.725863 | orchestrator |
2025-09-20 09:17:46.725874 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-20 09:17:46.725885 | orchestrator | Saturday 20 September 2025 09:17:45 +0000 (0:00:01.507) 0:01:03.066 ****
2025-09-20 09:17:46.725903 | orchestrator | changed: [testbed-manager]
2025-09-20 09:17:46.725914 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:17:46.725925 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:17:46.725935 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:17:46.725946 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:17:46.725956 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:17:46.725967 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:17:46.725978 | orchestrator |
2025-09-20 09:17:46.725989 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-20 09:17:46.725999 | orchestrator | Saturday 20 September 2025 09:17:46 +0000 (0:00:00.592) 0:01:03.658 ****
2025-09-20 09:17:46.726010 | orchestrator | ok: [testbed-manager]
2025-09-20 09:17:46.726068 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:17:46.726079 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:17:46.726090 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:17:46.726101 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:17:46.726112 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:17:46.726123 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:17:46.726166 | orchestrator |
2025-09-20 09:17:46.726186 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-20 09:20:15.283363 | orchestrator | Saturday 20 September 2025 09:17:46 +0000 (0:00:00.217) 0:01:03.876 ****
2025-09-20 09:20:15.283466 | orchestrator | ok: [testbed-manager]
2025-09-20 09:20:15.283480 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:20:15.283491 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:20:15.283501 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:20:15.283511 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:20:15.283521 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:20:15.283531 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:20:15.283541 | orchestrator |
2025-09-20 09:20:15.283552 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-20 09:20:15.283563 | orchestrator | Saturday 20 September 2025 09:17:47 +0000 (0:00:01.235) 0:01:05.112 ****
2025-09-20 09:20:15.283572 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:20:15.283583 | orchestrator | changed: [testbed-manager]
2025-09-20 09:20:15.283593 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:20:15.283603 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:20:15.283612 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:20:15.283622 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:20:15.283632 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:20:15.283642 | orchestrator |
2025-09-20 09:20:15.283652 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-20 09:20:15.283662 | orchestrator | Saturday 20 September 2025 09:17:49 +0000 (0:00:01.600) 0:01:06.713 ****
2025-09-20 09:20:15.283672 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:20:15.283682 | orchestrator | ok: [testbed-manager]
2025-09-20 09:20:15.283692 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:20:15.283702 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:20:15.283711 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:20:15.283721 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:20:15.283731 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:20:15.283741 | orchestrator |
2025-09-20 09:20:15.283751 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-20 09:20:15.283761 | orchestrator | Saturday 20 September 2025 09:17:51 +0000 (0:00:02.203) 0:01:08.917 ****
2025-09-20 09:20:15.283770 | orchestrator | ok: [testbed-manager]
2025-09-20 09:20:15.283780 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:20:15.283790 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:20:15.283800 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:20:15.283826 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:20:15.283884 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:20:15.283895 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:20:15.283904 | orchestrator |
2025-09-20 09:20:15.283916 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-20 09:20:15.283948 | orchestrator | Saturday 20 September 2025 09:18:34 +0000 (0:00:42.537) 0:01:51.454 ****
2025-09-20 09:20:15.283959 | orchestrator | changed: [testbed-manager]
2025-09-20 09:20:15.283971 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:20:15.283982 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:20:15.283993 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:20:15.284003 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:20:15.284014 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:20:15.284025 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:20:15.284036 | orchestrator |
2025-09-20 09:20:15.284052 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-20 09:20:15.284064 | orchestrator | Saturday 20 September 2025 09:19:54 +0000 (0:01:20.705) 0:03:12.159 ****
2025-09-20 09:20:15.284075 | orchestrator | ok: [testbed-manager]
2025-09-20 09:20:15.284087 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:20:15.284098 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:20:15.284108 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:20:15.284119 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:20:15.284130 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:20:15.284141 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:20:15.284152 | orchestrator |
2025-09-20 09:20:15.284163 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-20 09:20:15.284175 | orchestrator | Saturday 20 September 2025 09:19:56 +0000 (0:00:01.841) 0:03:14.001 ****
2025-09-20 09:20:15.284186 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:20:15.284197 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:20:15.284208 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:20:15.284218 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:20:15.284229 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:20:15.284239 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:20:15.284251 | orchestrator | changed: [testbed-manager]
2025-09-20 09:20:15.284262 | orchestrator |
2025-09-20 09:20:15.284273 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-20 09:20:15.284283 | orchestrator | Saturday 20 September 2025 09:20:08 +0000 (0:00:11.941) 0:03:25.942 ****
2025-09-20 09:20:15.284300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-20 09:20:15.284315 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-09-20 09:20:15.284347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-20 09:20:15.284364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-20 09:20:15.284382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-20 09:20:15.284392 | orchestrator |
2025-09-20 09:20:15.284402 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-20 09:20:15.284412 | orchestrator | Saturday 20 September 2025 09:20:09 +0000 (0:00:00.342) 0:03:26.284 ****
2025-09-20 09:20:15.284422 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:20:15.284432 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:20:15.284442 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:20:15.284451 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:20:15.284461 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:20:15.284471 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:20:15.284480 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:20:15.284490 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:20:15.284500 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:20:15.284509 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:20:15.284519 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:20:15.284529 | orchestrator |
2025-09-20 09:20:15.284538 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-20 09:20:15.284552 | orchestrator | Saturday 20 September 2025 09:20:09 +0000 (0:00:00.606) 0:03:26.890 ****
2025-09-20 09:20:15.284562 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-20 09:20:15.284573 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-20 09:20:15.284583 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-20 09:20:15.284593 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-20 09:20:15.284603 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-20 09:20:15.284612 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-20 09:20:15.284622 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-20 09:20:15.284631 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-20 09:20:15.284641 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-20 09:20:15.284651 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-20 09:20:15.284660 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:20:15.284670 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-20 09:20:15.284680 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-20 09:20:15.284690 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-20 09:20:15.284700 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-20 09:20:15.284709 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-20 09:20:15.284719 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-20 09:20:15.284735 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-20 09:20:15.284744 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-20 09:20:15.284754 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-20 09:20:15.284764 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-20 09:20:15.284779 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-20 09:20:17.522132 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-20 09:20:17.522235 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-20 09:20:17.522251 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-20 09:20:17.522263 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-20 09:20:17.522275 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-20 09:20:17.522287 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-20 09:20:17.522298 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-20 09:20:17.522309 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-20 09:20:17.522320 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:20:17.522333 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-20 09:20:17.522344 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:20:17.522355 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-20 09:20:17.522366 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-20 09:20:17.522377 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-20 09:20:17.522388 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-20 09:20:17.522399 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-20 09:20:17.522410 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-20 09:20:17.522421 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-20 09:20:17.522432 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-20 09:20:17.522442 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-20 09:20:17.522471 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-20 09:20:17.522483 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:20:17.522494 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-20 09:20:17.522505 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-20 09:20:17.522516 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-20 09:20:17.522527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-20 09:20:17.522538 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-20 09:20:17.522549 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-20 09:20:17.522560 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-20 09:20:17.522593 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-20 09:20:17.522605 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-20 09:20:17.522616 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-20 09:20:17.522629 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-20 09:20:17.522643 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-20 09:20:17.522655 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-20 09:20:17.522668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-20 09:20:17.522681 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-20 09:20:17.522694 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-20 09:20:17.522707 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-20 09:20:17.522719 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-20 09:20:17.522732 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-20 09:20:17.522745 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-20 09:20:17.522758 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-20 09:20:17.522787 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-20 09:20:17.522801 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-20 09:20:17.522815 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-20 09:20:17.522827 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-20 09:20:17.522867 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-20 09:20:17.522880 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-20 09:20:17.522893 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-20 09:20:17.522906 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-20 09:20:17.522918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-20 09:20:17.522931 | orchestrator |
2025-09-20 09:20:17.522945 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-20 09:20:17.522958 | orchestrator | Saturday 20 September 2025 09:20:15 +0000 (0:00:05.537) 0:03:32.428 ****
2025-09-20 09:20:17.522971 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-20 09:20:17.522982 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-20 09:20:17.522993 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-20 09:20:17.523004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-20 09:20:17.523015 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-20 09:20:17.523026 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-20 09:20:17.523037 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-20 09:20:17.523048 | orchestrator |
2025-09-20 09:20:17.523059 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-20 09:20:17.523077 | orchestrator | Saturday 20 September 2025 09:20:15 +0000 (0:00:00.660) 0:03:33.089 ****
2025-09-20 09:20:17.523089 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-20 09:20:17.523100 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-20 09:20:17.523112 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:20:17.523123 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:20:17.523134 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-20 09:20:17.523145 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-20 09:20:17.523157 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:20:17.523168 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:20:17.523187 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-20 09:20:17.523198 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-20 09:20:17.523210 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-20 09:20:17.523221 | orchestrator |
2025-09-20 09:20:17.523232 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-20 09:20:17.523243 | orchestrator | Saturday 20 September 2025 09:20:16 +0000 (0:00:00.641) 0:03:33.730 ****
2025-09-20 09:20:17.523253 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-20 09:20:17.523265 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-20 09:20:17.523275 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:20:17.523286 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:20:17.523297 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-20 09:20:17.523308 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:20:17.523319 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-20 09:20:17.523330 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:20:17.523341 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-20 09:20:17.523352 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-20 09:20:17.523362 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-20 09:20:17.523373 | orchestrator |
2025-09-20 09:20:17.523384 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-20 09:20:17.523395 | orchestrator | Saturday 20 September 2025 09:20:17 +0000 (0:00:00.291) 0:03:34.387 ****
2025-09-20 09:20:17.523406 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:20:17.523417 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:20:17.523428 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:20:17.523438 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:20:17.523449 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:20:17.523466 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:20:29.301934 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:20:29.302098 | orchestrator |
2025-09-20 09:20:29.302119 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-20 09:20:29.302133 | orchestrator | Saturday 20 September 2025 09:20:17 +0000 (0:00:00.291) 0:03:34.679 ****
2025-09-20 09:20:29.302145 | orchestrator | ok: [testbed-manager]
2025-09-20 09:20:29.302158 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:20:29.302169 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:20:29.302180 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:20:29.302218 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:20:29.302229 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:20:29.302240 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:20:29.302251 | orchestrator |
2025-09-20 09:20:29.302262 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-20 09:20:29.302273 | orchestrator | Saturday 20 September 2025 09:20:23 +0000 (0:00:05.786) 0:03:40.465 ****
2025-09-20 09:20:29.302285 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-20 09:20:29.302296 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-20 09:20:29.302307 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:20:29.302318 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-20 09:20:29.302329 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:20:29.302340 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-20 09:20:29.302351 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:20:29.302362 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-20 09:20:29.302373 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:20:29.302384 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-20 09:20:29.302395 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:20:29.302409 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:20:29.302420 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-20 09:20:29.302431 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:20:29.302444 | orchestrator |
2025-09-20 09:20:29.302457 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-20 09:20:29.302470 | orchestrator | Saturday 20 September 2025 09:20:23 +0000 (0:00:00.294) 0:03:40.759 ****
2025-09-20 09:20:29.302483 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-20 09:20:29.302496 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-20 09:20:29.302508 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-20 09:20:29.302520 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-20 09:20:29.302532 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-20 09:20:29.302543 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-20 09:20:29.302554 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-20 09:20:29.302564 | orchestrator |
2025-09-20 09:20:29.302575 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-20 09:20:29.302600 | orchestrator | Saturday 20 September 2025 09:20:24 +0000 (0:00:01.001) 0:03:41.761 ****
2025-09-20 09:20:29.302614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:20:29.302627 | orchestrator |
2025-09-20 09:20:29.302638 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-20 09:20:29.302649 | orchestrator | Saturday 20 September 2025 09:20:25 +0000 (0:00:00.448) 0:03:42.209 ****
2025-09-20 09:20:29.302660 | orchestrator | ok: [testbed-manager]
2025-09-20 09:20:29.302671 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:20:29.302682 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:20:29.302692 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:20:29.302703 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:20:29.302714 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:20:29.302725 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:20:29.302736 | orchestrator |
2025-09-20 09:20:29.302747 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-20 09:20:29.302758 | orchestrator | Saturday 20 September 2025 09:20:26 +0000 (0:00:01.366) 0:03:43.576 ****
2025-09-20 09:20:29.302768 | orchestrator | ok: [testbed-manager]
2025-09-20 09:20:29.302779 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:20:29.302790 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:20:29.302801 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:20:29.302812 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:20:29.302823 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:20:29.302860 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:20:29.302872 | orchestrator |
2025-09-20 09:20:29.302883 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-09-20 09:20:29.302894 | orchestrator | Saturday 20 September 2025 09:20:27 +0000 (0:00:00.621) 0:03:44.197 ****
2025-09-20 09:20:29.302904 | orchestrator | changed: [testbed-manager]
2025-09-20 09:20:29.302926 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:20:29.302938 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:20:29.302949 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:20:29.302960 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:20:29.302971 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:20:29.302982 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:20:29.302993 | orchestrator |
2025-09-20 09:20:29.303003 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-09-20 09:20:29.303015 | orchestrator | Saturday 20 September 2025 09:20:27 +0000 (0:00:00.650) 0:03:44.848 ****
2025-09-20 09:20:29.303026 | orchestrator | ok: [testbed-manager]
2025-09-20 09:20:29.303037 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:20:29.303047 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:20:29.303058 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:20:29.303069 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:20:29.303080 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:20:29.303091 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:20:29.303102 | orchestrator |
2025-09-20 09:20:29.303113 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-20 09:20:29.303124 | orchestrator | Saturday 20 September 2025 09:20:28 +0000 (0:00:00.578) 0:03:45.426 ****
2025-09-20 09:20:29.303158 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758358694.7161226, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:20:29.303174 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758358740.4022024, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:20:29.303186 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758358722.9767945, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:20:29.303203 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758358729.2274501, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:20:29.303215 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758358723.1668217, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:20:29.303234 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758358712.9291098, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:20:29.303245 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758358725.358751, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:20:29.303275 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:20:45.430258 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:20:45.520496 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name':
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:20:45.520648 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:20:45.520740 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:20:45.520775 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:20:45.520809 | 
orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:20:45.520872 | orchestrator | 2025-09-20 09:20:45.520898 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-20 09:20:45.520917 | orchestrator | Saturday 20 September 2025 09:20:29 +0000 (0:00:01.023) 0:03:46.450 **** 2025-09-20 09:20:45.520935 | orchestrator | changed: [testbed-manager] 2025-09-20 09:20:45.520954 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:20:45.520971 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:20:45.520988 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:20:45.521006 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:20:45.521024 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:20:45.521127 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:20:45.521149 | orchestrator | 2025-09-20 09:20:45.521167 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-20 09:20:45.521185 | orchestrator | Saturday 20 September 2025 09:20:30 +0000 (0:00:01.122) 0:03:47.574 **** 2025-09-20 09:20:45.521203 | orchestrator | changed: [testbed-manager] 2025-09-20 09:20:45.521223 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:20:45.521242 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:20:45.521261 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:20:45.521321 | orchestrator | changed: [testbed-node-3] 2025-09-20 
09:20:45.521343 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:20:45.521363 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:20:45.521381 | orchestrator | 2025-09-20 09:20:45.521401 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-20 09:20:45.521419 | orchestrator | Saturday 20 September 2025 09:20:31 +0000 (0:00:01.303) 0:03:48.877 **** 2025-09-20 09:20:45.521439 | orchestrator | changed: [testbed-manager] 2025-09-20 09:20:45.521459 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:20:45.521479 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:20:45.521499 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:20:45.521519 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:20:45.521538 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:20:45.521558 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:20:45.521578 | orchestrator | 2025-09-20 09:20:45.521598 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-20 09:20:45.521616 | orchestrator | Saturday 20 September 2025 09:20:32 +0000 (0:00:01.162) 0:03:50.039 **** 2025-09-20 09:20:45.521653 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:20:45.521672 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:20:45.521690 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:20:45.521731 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:20:45.521752 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:20:45.521770 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:20:45.521791 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:20:45.521810 | orchestrator | 2025-09-20 09:20:45.521830 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-20 09:20:45.521910 | orchestrator | Saturday 20 September 2025 09:20:33 +0000 (0:00:00.271) 0:03:50.311 **** 2025-09-20 
09:20:45.521931 | orchestrator | ok: [testbed-manager] 2025-09-20 09:20:45.521952 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:20:45.521972 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:20:45.521992 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:20:45.522013 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:20:45.522108 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:20:45.522128 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:20:45.522149 | orchestrator | 2025-09-20 09:20:45.522169 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-20 09:20:45.522190 | orchestrator | Saturday 20 September 2025 09:20:33 +0000 (0:00:00.734) 0:03:51.045 **** 2025-09-20 09:20:45.522221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:20:45.522243 | orchestrator | 2025-09-20 09:20:45.522263 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-20 09:20:45.522284 | orchestrator | Saturday 20 September 2025 09:20:34 +0000 (0:00:00.404) 0:03:51.449 **** 2025-09-20 09:20:45.522304 | orchestrator | ok: [testbed-manager] 2025-09-20 09:20:45.522325 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:20:45.522344 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:20:45.522364 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:20:45.522384 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:20:45.522406 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:20:45.522426 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:20:45.522447 | orchestrator | 2025-09-20 09:20:45.522467 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-20 09:20:45.522485 | orchestrator | 
Saturday 20 September 2025 09:20:41 +0000 (0:00:07.696) 0:03:59.146 **** 2025-09-20 09:20:45.522504 | orchestrator | ok: [testbed-manager] 2025-09-20 09:20:45.522523 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:20:45.522540 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:20:45.522558 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:20:45.522576 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:20:45.522594 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:20:45.522612 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:20:45.522628 | orchestrator | 2025-09-20 09:20:45.522645 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-20 09:20:45.522664 | orchestrator | Saturday 20 September 2025 09:20:43 +0000 (0:00:01.212) 0:04:00.358 **** 2025-09-20 09:20:45.522683 | orchestrator | ok: [testbed-manager] 2025-09-20 09:20:45.522699 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:20:45.522714 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:20:45.522730 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:20:45.522747 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:20:45.522765 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:20:45.522783 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:20:45.522801 | orchestrator | 2025-09-20 09:20:45.522820 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-20 09:20:45.522838 | orchestrator | Saturday 20 September 2025 09:20:44 +0000 (0:00:01.197) 0:04:01.555 **** 2025-09-20 09:20:45.522879 | orchestrator | ok: [testbed-manager] 2025-09-20 09:20:45.522910 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:20:45.522928 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:20:45.522945 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:20:45.522962 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:20:45.522980 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:20:45.522997 | orchestrator | ok: 
[testbed-node-5] 2025-09-20 09:20:45.523015 | orchestrator | 2025-09-20 09:20:45.523032 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-20 09:20:45.523052 | orchestrator | Saturday 20 September 2025 09:20:44 +0000 (0:00:00.297) 0:04:01.853 **** 2025-09-20 09:20:45.523070 | orchestrator | ok: [testbed-manager] 2025-09-20 09:20:45.523086 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:20:45.523103 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:20:45.523120 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:20:45.523138 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:20:45.523155 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:20:45.523173 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:20:45.523190 | orchestrator | 2025-09-20 09:20:45.523206 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-20 09:20:45.523222 | orchestrator | Saturday 20 September 2025 09:20:45 +0000 (0:00:00.403) 0:04:02.257 **** 2025-09-20 09:20:45.523238 | orchestrator | ok: [testbed-manager] 2025-09-20 09:20:45.523253 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:20:45.523269 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:20:45.523284 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:20:45.523298 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:20:45.523330 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:21:55.731117 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:21:55.731222 | orchestrator | 2025-09-20 09:21:55.731239 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-20 09:21:55.731252 | orchestrator | Saturday 20 September 2025 09:20:45 +0000 (0:00:00.328) 0:04:02.585 **** 2025-09-20 09:21:55.731264 | orchestrator | ok: [testbed-manager] 2025-09-20 09:21:55.731275 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:21:55.731286 | orchestrator | ok: 
[testbed-node-1] 2025-09-20 09:21:55.731297 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:21:55.731308 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:21:55.731319 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:21:55.731330 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:21:55.731341 | orchestrator | 2025-09-20 09:21:55.731352 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-20 09:21:55.731365 | orchestrator | Saturday 20 September 2025 09:20:51 +0000 (0:00:05.877) 0:04:08.462 **** 2025-09-20 09:21:55.731378 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:21:55.731392 | orchestrator | 2025-09-20 09:21:55.731403 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-20 09:21:55.731414 | orchestrator | Saturday 20 September 2025 09:20:51 +0000 (0:00:00.396) 0:04:08.859 **** 2025-09-20 09:21:55.731426 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-20 09:21:55.731437 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-20 09:21:55.731448 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-20 09:21:55.731459 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:21:55.731470 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-20 09:21:55.731481 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-20 09:21:55.731492 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:21:55.731503 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-20 09:21:55.731514 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-20 09:21:55.731525 | orchestrator | 
skipping: [testbed-node-2] => (item=apt-daily)  2025-09-20 09:21:55.731536 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:21:55.731572 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:21:55.731598 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-20 09:21:55.731610 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-20 09:21:55.731621 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-20 09:21:55.731632 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-20 09:21:55.731643 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:21:55.731654 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:21:55.731665 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-20 09:21:55.731676 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-20 09:21:55.731687 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:21:55.731698 | orchestrator | 2025-09-20 09:21:55.731709 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-20 09:21:55.731720 | orchestrator | Saturday 20 September 2025 09:20:52 +0000 (0:00:00.332) 0:04:09.191 **** 2025-09-20 09:21:55.731731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:21:55.731743 | orchestrator | 2025-09-20 09:21:55.731754 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-20 09:21:55.731765 | orchestrator | Saturday 20 September 2025 09:20:52 +0000 (0:00:00.431) 0:04:09.623 **** 2025-09-20 09:21:55.731776 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-20 09:21:55.731787 | orchestrator | skipping: 
[testbed-manager] 2025-09-20 09:21:55.731798 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-20 09:21:55.731809 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-20 09:21:55.731820 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:21:55.731831 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-20 09:21:55.731842 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:21:55.731875 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-20 09:21:55.731886 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:21:55.731897 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-20 09:21:55.731908 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:21:55.731918 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:21:55.731929 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-20 09:21:55.731940 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:21:55.731951 | orchestrator | 2025-09-20 09:21:55.731961 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-20 09:21:55.731972 | orchestrator | Saturday 20 September 2025 09:20:52 +0000 (0:00:00.286) 0:04:09.910 **** 2025-09-20 09:21:55.731983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:21:55.731994 | orchestrator | 2025-09-20 09:21:55.732005 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-20 09:21:55.732016 | orchestrator | Saturday 20 September 2025 09:20:53 +0000 (0:00:00.441) 0:04:10.351 **** 2025-09-20 09:21:55.732027 | orchestrator | changed: [testbed-node-0] 2025-09-20 
09:21:55.732054 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:21:55.732066 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:21:55.732076 | orchestrator | changed: [testbed-manager] 2025-09-20 09:21:55.732087 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:21:55.732098 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:21:55.732109 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:21:55.732120 | orchestrator | 2025-09-20 09:21:55.732131 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-20 09:21:55.732150 | orchestrator | Saturday 20 September 2025 09:21:27 +0000 (0:00:34.126) 0:04:44.478 **** 2025-09-20 09:21:55.732161 | orchestrator | changed: [testbed-manager] 2025-09-20 09:21:55.732172 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:21:55.732183 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:21:55.732194 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:21:55.732204 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:21:55.732215 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:21:55.732226 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:21:55.732236 | orchestrator | 2025-09-20 09:21:55.732247 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-20 09:21:55.732258 | orchestrator | Saturday 20 September 2025 09:21:35 +0000 (0:00:08.167) 0:04:52.645 **** 2025-09-20 09:21:55.732269 | orchestrator | changed: [testbed-manager] 2025-09-20 09:21:55.732280 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:21:55.732291 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:21:55.732301 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:21:55.732312 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:21:55.732323 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:21:55.732334 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:21:55.732344 | 
orchestrator | 2025-09-20 09:21:55.732355 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-20 09:21:55.732366 | orchestrator | Saturday 20 September 2025 09:21:43 +0000 (0:00:08.047) 0:05:00.692 **** 2025-09-20 09:21:55.732377 | orchestrator | ok: [testbed-manager] 2025-09-20 09:21:55.732388 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:21:55.732398 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:21:55.732409 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:21:55.732420 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:21:55.732431 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:21:55.732442 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:21:55.732452 | orchestrator | 2025-09-20 09:21:55.732463 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-20 09:21:55.732475 | orchestrator | Saturday 20 September 2025 09:21:45 +0000 (0:00:01.642) 0:05:02.334 **** 2025-09-20 09:21:55.732486 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:21:55.732497 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:21:55.732513 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:21:55.732524 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:21:55.732535 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:21:55.732546 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:21:55.732556 | orchestrator | changed: [testbed-manager] 2025-09-20 09:21:55.732567 | orchestrator | 2025-09-20 09:21:55.732578 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-20 09:21:55.732589 | orchestrator | Saturday 20 September 2025 09:21:51 +0000 (0:00:06.503) 0:05:08.838 **** 2025-09-20 09:21:55.732601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:21:55.732613 | orchestrator | 2025-09-20 09:21:55.732624 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-20 09:21:55.732635 | orchestrator | Saturday 20 September 2025 09:21:52 +0000 (0:00:00.428) 0:05:09.267 **** 2025-09-20 09:21:55.732646 | orchestrator | changed: [testbed-manager] 2025-09-20 09:21:55.732656 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:21:55.732667 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:21:55.732678 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:21:55.732689 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:21:55.732699 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:21:55.732710 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:21:55.732721 | orchestrator | 2025-09-20 09:21:55.732732 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-20 09:21:55.732749 | orchestrator | Saturday 20 September 2025 09:21:52 +0000 (0:00:00.824) 0:05:10.092 **** 2025-09-20 09:21:55.732760 | orchestrator | ok: [testbed-manager] 2025-09-20 09:21:55.732771 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:21:55.732782 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:21:55.732793 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:21:55.732804 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:21:55.732815 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:21:55.732825 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:21:55.732836 | orchestrator | 2025-09-20 09:21:55.732847 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-20 09:21:55.732892 | orchestrator | Saturday 20 September 2025 09:21:54 +0000 (0:00:01.714) 0:05:11.807 **** 2025-09-20 09:21:55.732903 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:21:55.732914 | orchestrator | changed: [testbed-node-3] 
2025-09-20 09:21:55.732925 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:21:55.732936 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:21:55.732947 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:21:55.732958 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:21:55.732969 | orchestrator | changed: [testbed-manager] 2025-09-20 09:21:55.732979 | orchestrator | 2025-09-20 09:21:55.732990 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-20 09:21:55.733001 | orchestrator | Saturday 20 September 2025 09:21:55 +0000 (0:00:00.773) 0:05:12.580 **** 2025-09-20 09:21:55.733012 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:21:55.733023 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:21:55.733034 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:21:55.733045 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:21:55.733056 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:21:55.733066 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:21:55.733077 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:21:55.733088 | orchestrator | 2025-09-20 09:21:55.733099 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-20 09:21:55.733117 | orchestrator | Saturday 20 September 2025 09:21:55 +0000 (0:00:00.304) 0:05:12.884 **** 2025-09-20 09:22:22.293561 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:22:22.293669 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:22:22.293683 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:22:22.293693 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:22:22.293703 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:22:22.293712 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:22:22.293722 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:22:22.293733 | orchestrator | 2025-09-20 09:22:22.293744 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-20 09:22:22.293755 | orchestrator | Saturday 20 September 2025 09:21:56 +0000 (0:00:00.438) 0:05:13.322 **** 2025-09-20 09:22:22.293764 | orchestrator | ok: [testbed-manager] 2025-09-20 09:22:22.293775 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:22:22.293785 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:22:22.293795 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:22:22.293804 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:22:22.293814 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:22:22.293824 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:22:22.293833 | orchestrator | 2025-09-20 09:22:22.293843 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-20 09:22:22.293889 | orchestrator | Saturday 20 September 2025 09:21:56 +0000 (0:00:00.306) 0:05:13.629 **** 2025-09-20 09:22:22.293900 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:22:22.293910 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:22:22.293920 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:22:22.293930 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:22:22.293940 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:22:22.293950 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:22:22.293959 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:22:22.293995 | orchestrator | 2025-09-20 09:22:22.294006 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-20 09:22:22.294069 | orchestrator | Saturday 20 September 2025 09:21:56 +0000 (0:00:00.296) 0:05:13.926 **** 2025-09-20 09:22:22.294081 | orchestrator | ok: [testbed-manager] 2025-09-20 09:22:22.294090 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:22:22.294102 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:22:22.294113 | orchestrator | ok: 
[testbed-node-2]
2025-09-20 09:22:22.294124 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:22:22.294135 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:22:22.294146 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:22:22.294157 | orchestrator |
2025-09-20 09:22:22.294169 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-20 09:22:22.294181 | orchestrator | Saturday 20 September 2025  09:21:57 +0000 (0:00:00.305)       0:05:14.231 ****
2025-09-20 09:22:22.294192 | orchestrator | ok: [testbed-manager] =>
2025-09-20 09:22:22.294204 | orchestrator |   docker_version: 5:27.5.1
2025-09-20 09:22:22.294215 | orchestrator | ok: [testbed-node-0] =>
2025-09-20 09:22:22.294226 | orchestrator |   docker_version: 5:27.5.1
2025-09-20 09:22:22.294236 | orchestrator | ok: [testbed-node-1] =>
2025-09-20 09:22:22.294247 | orchestrator |   docker_version: 5:27.5.1
2025-09-20 09:22:22.294259 | orchestrator | ok: [testbed-node-2] =>
2025-09-20 09:22:22.294270 | orchestrator |   docker_version: 5:27.5.1
2025-09-20 09:22:22.294281 | orchestrator | ok: [testbed-node-3] =>
2025-09-20 09:22:22.294292 | orchestrator |   docker_version: 5:27.5.1
2025-09-20 09:22:22.294303 | orchestrator | ok: [testbed-node-4] =>
2025-09-20 09:22:22.294314 | orchestrator |   docker_version: 5:27.5.1
2025-09-20 09:22:22.294325 | orchestrator | ok: [testbed-node-5] =>
2025-09-20 09:22:22.294336 | orchestrator |   docker_version: 5:27.5.1
2025-09-20 09:22:22.294347 | orchestrator |
2025-09-20 09:22:22.294358 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-20 09:22:22.294370 | orchestrator | Saturday 20 September 2025  09:21:57 +0000 (0:00:00.317)       0:05:14.548 ****
2025-09-20 09:22:22.294381 | orchestrator | ok: [testbed-manager] =>
2025-09-20 09:22:22.294392 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-20 09:22:22.294403 | orchestrator | ok: [testbed-node-0] =>
2025-09-20 09:22:22.294413 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-20 09:22:22.294425 | orchestrator | ok: [testbed-node-1] =>
2025-09-20 09:22:22.294436 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-20 09:22:22.294447 | orchestrator | ok: [testbed-node-2] =>
2025-09-20 09:22:22.294458 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-20 09:22:22.294468 | orchestrator | ok: [testbed-node-3] =>
2025-09-20 09:22:22.294478 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-20 09:22:22.294487 | orchestrator | ok: [testbed-node-4] =>
2025-09-20 09:22:22.294497 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-20 09:22:22.294507 | orchestrator | ok: [testbed-node-5] =>
2025-09-20 09:22:22.294516 | orchestrator |   docker_cli_version: 5:27.5.1
2025-09-20 09:22:22.294526 | orchestrator |
2025-09-20 09:22:22.294535 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-20 09:22:22.294545 | orchestrator | Saturday 20 September 2025  09:21:57 +0000 (0:00:00.277)       0:05:14.826 ****
2025-09-20 09:22:22.294555 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:22:22.294565 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:22:22.294575 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:22:22.294585 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:22:22.294594 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:22:22.294604 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:22:22.294613 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:22:22.294623 | orchestrator |
2025-09-20 09:22:22.294633 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-20 09:22:22.294643 | orchestrator | Saturday 20 September 2025  09:21:57 +0000 (0:00:00.270)       0:05:15.097 ****
2025-09-20 09:22:22.294653 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:22:22.294670 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:22:22.294679 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:22:22.294689 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:22:22.294699 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:22:22.294708 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:22:22.294718 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:22:22.294728 | orchestrator |
2025-09-20 09:22:22.294738 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-09-20 09:22:22.294748 | orchestrator | Saturday 20 September 2025  09:21:58 +0000 (0:00:00.303)       0:05:15.400 ****
2025-09-20 09:22:22.294776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:22:22.294789 | orchestrator |
2025-09-20 09:22:22.294799 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-09-20 09:22:22.294809 | orchestrator | Saturday 20 September 2025  09:21:58 +0000 (0:00:00.468)       0:05:15.869 ****
2025-09-20 09:22:22.294819 | orchestrator | ok: [testbed-manager]
2025-09-20 09:22:22.294829 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:22:22.294839 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:22:22.294866 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:22:22.294877 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:22:22.294886 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:22:22.294896 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:22:22.294905 | orchestrator |
2025-09-20 09:22:22.294915 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-09-20 09:22:22.294925 | orchestrator | Saturday 20 September 2025  09:21:59 +0000 (0:00:00.837)       0:05:16.706 ****
2025-09-20 09:22:22.294935 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:22:22.294944 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:22:22.294954 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:22:22.294963 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:22:22.294973 | orchestrator | ok: [testbed-manager]
2025-09-20 09:22:22.294983 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:22:22.294992 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:22:22.295002 | orchestrator |
2025-09-20 09:22:22.295011 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-09-20 09:22:22.295022 | orchestrator | Saturday 20 September 2025  09:22:02 +0000 (0:00:03.363)       0:05:20.069 ****
2025-09-20 09:22:22.295032 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-09-20 09:22:22.295042 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-09-20 09:22:22.295051 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-09-20 09:22:22.295061 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-09-20 09:22:22.295087 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-09-20 09:22:22.295097 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:22:22.295106 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-09-20 09:22:22.295116 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-09-20 09:22:22.295125 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-09-20 09:22:22.295135 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-09-20 09:22:22.295145 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:22:22.295154 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-09-20 09:22:22.295168 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-09-20 09:22:22.295178 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-09-20 09:22:22.295188 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:22:22.295197 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-09-20 09:22:22.295207 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-09-20 09:22:22.295217 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-09-20 09:22:22.295233 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:22:22.295243 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-09-20 09:22:22.295252 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-09-20 09:22:22.295262 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-09-20 09:22:22.295272 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:22:22.295281 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:22:22.295291 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-09-20 09:22:22.295300 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-09-20 09:22:22.295310 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-09-20 09:22:22.295320 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:22:22.295329 | orchestrator |
2025-09-20 09:22:22.295339 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-09-20 09:22:22.295348 | orchestrator | Saturday 20 September 2025  09:22:03 +0000 (0:00:00.624)       0:05:20.694 ****
2025-09-20 09:22:22.295358 | orchestrator | ok: [testbed-manager]
2025-09-20 09:22:22.295368 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:22:22.295377 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:22:22.295387 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:22:22.295396 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:22:22.295406 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:22:22.295415 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:22:22.295425 | orchestrator |
2025-09-20 09:22:22.295435 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-09-20 09:22:22.295444 | orchestrator | Saturday 20 September 2025  09:22:09 +0000 (0:00:06.406)       0:05:27.100 ****
2025-09-20 09:22:22.295454 | orchestrator | ok: [testbed-manager]
2025-09-20 09:22:22.295464 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:22:22.295473 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:22:22.295483 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:22:22.295492 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:22:22.295502 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:22:22.295511 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:22:22.295521 | orchestrator |
2025-09-20 09:22:22.295530 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-09-20 09:22:22.295540 | orchestrator | Saturday 20 September 2025  09:22:11 +0000 (0:00:01.076)       0:05:28.176 ****
2025-09-20 09:22:22.295549 | orchestrator | ok: [testbed-manager]
2025-09-20 09:22:22.295559 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:22:22.295569 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:22:22.295578 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:22:22.295588 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:22:22.295597 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:22:22.295607 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:22:22.295616 | orchestrator |
2025-09-20 09:22:22.295626 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-09-20 09:22:22.295635 | orchestrator | Saturday 20 September 2025  09:22:18 +0000 (0:00:07.870)       0:05:36.047 ****
2025-09-20 09:22:22.295645 | orchestrator | changed: [testbed-manager]
2025-09-20 09:22:22.295655 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:22:22.295664 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:22:22.295681 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:06.971703 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:06.971809 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:06.971825 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:06.971890 | orchestrator |
2025-09-20 09:23:06.971904 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-09-20 09:23:06.971918 | orchestrator | Saturday 20 September 2025  09:22:22 +0000 (0:00:03.392)       0:05:39.439 ****
2025-09-20 09:23:06.971930 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:06.971943 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:06.971954 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:06.971989 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:06.972001 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:06.972012 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:06.972023 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:06.972034 | orchestrator |
2025-09-20 09:23:06.972045 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-09-20 09:23:06.972056 | orchestrator | Saturday 20 September 2025  09:22:23 +0000 (0:00:01.328)       0:05:40.768 ****
2025-09-20 09:23:06.972067 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:06.972078 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:06.972089 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:06.972100 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:06.972111 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:06.972122 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:06.972133 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:06.972143 | orchestrator |
2025-09-20 09:23:06.972155 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-09-20 09:23:06.972165 | orchestrator | Saturday 20 September 2025  09:22:24 +0000 (0:00:00.836)       0:05:42.121 ****
2025-09-20 09:23:06.972176 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:23:06.972187 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:23:06.972198 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:23:06.972209 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:23:06.972220 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:23:06.972230 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:23:06.972243 | orchestrator | changed: [testbed-manager]
2025-09-20 09:23:06.972256 | orchestrator |
2025-09-20 09:23:06.972269 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-09-20 09:23:06.972281 | orchestrator | Saturday 20 September 2025  09:22:25 +0000 (0:00:00.836)       0:05:42.958 ****
2025-09-20 09:23:06.972294 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:06.972307 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:06.972320 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:06.972348 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:06.972361 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:06.972373 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:06.972387 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:06.972399 | orchestrator |
2025-09-20 09:23:06.972412 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-09-20 09:23:06.972425 | orchestrator | Saturday 20 September 2025  09:22:36 +0000 (0:00:10.252)       0:05:53.210 ****
2025-09-20 09:23:06.972438 | orchestrator | changed: [testbed-manager]
2025-09-20 09:23:06.972450 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:06.972463 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:06.972477 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:06.972490 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:06.972503 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:06.972515 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:06.972527 | orchestrator |
2025-09-20 09:23:06.972540 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-09-20 09:23:06.972554 | orchestrator | Saturday 20 September 2025  09:22:36 +0000 (0:00:00.879)       0:05:54.089 ****
2025-09-20 09:23:06.972567 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:06.972580 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:06.972593 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:06.972604 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:06.972615 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:06.972626 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:06.972637 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:06.972648 | orchestrator |
2025-09-20 09:23:06.972660 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-09-20 09:23:06.972671 | orchestrator | Saturday 20 September 2025  09:22:45 +0000 (0:00:09.029)       0:06:03.119 ****
2025-09-20 09:23:06.972690 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:06.972701 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:06.972712 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:06.972723 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:06.972734 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:06.972745 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:06.972756 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:06.972767 | orchestrator |
2025-09-20 09:23:06.972778 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-09-20 09:23:06.972789 | orchestrator | Saturday 20 September 2025  09:22:56 +0000 (0:00:10.751)       0:06:13.870 ****
2025-09-20 09:23:06.972800 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-09-20 09:23:06.972812 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-09-20 09:23:06.972823 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-09-20 09:23:06.972834 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-09-20 09:23:06.972861 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-09-20 09:23:06.972872 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-09-20 09:23:06.972883 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-09-20 09:23:06.972894 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-09-20 09:23:06.972905 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-09-20 09:23:06.972916 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-09-20 09:23:06.972927 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-09-20 09:23:06.972938 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-09-20 09:23:06.972949 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-09-20 09:23:06.972960 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-09-20 09:23:06.972971 | orchestrator |
2025-09-20 09:23:06.972983 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-09-20 09:23:06.973010 | orchestrator | Saturday 20 September 2025  09:22:58 +0000 (0:00:01.492)       0:06:15.362 ****
2025-09-20 09:23:06.973022 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:23:06.973033 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:23:06.973043 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:23:06.973054 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:23:06.973073 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:23:06.973092 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:23:06.973109 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:23:06.973126 | orchestrator |
2025-09-20 09:23:06.973144 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-09-20 09:23:06.973162 | orchestrator | Saturday 20 September 2025  09:22:58 +0000 (0:00:00.558)       0:06:15.921 ****
2025-09-20 09:23:06.973180 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:06.973199 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:06.973219 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:06.973238 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:06.973256 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:06.973275 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:06.973295 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:06.973315 | orchestrator |
2025-09-20 09:23:06.973335 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-09-20 09:23:06.973356 | orchestrator | Saturday 20 September 2025  09:23:02 +0000 (0:00:03.787)       0:06:19.709 ****
2025-09-20 09:23:06.973375 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:23:06.973395 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:23:06.973415 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:23:06.973436 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:23:06.973455 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:23:06.973473 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:23:06.973492 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:23:06.973526 | orchestrator |
2025-09-20 09:23:06.973548 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-09-20 09:23:06.973568 | orchestrator | Saturday 20 September 2025  09:23:03 +0000 (0:00:00.525)       0:06:20.234 ****
2025-09-20 09:23:06.973587 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-09-20 09:23:06.973605 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-09-20 09:23:06.973616 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:23:06.973626 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-09-20 09:23:06.973646 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-20 09:23:06.973657 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:23:06.973668 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-20 09:23:06.973678 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-20 09:23:06.973689 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:23:06.973700 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-20 09:23:06.973711 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-20 09:23:06.973721 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:23:06.973732 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-20 09:23:06.973743 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-20 09:23:06.973753 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:23:06.973764 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-20 09:23:06.973775 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-20 09:23:06.973785 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:23:06.973796 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-20 09:23:06.973807 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-20 09:23:06.973818 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:23:06.973828 | orchestrator |
2025-09-20 09:23:06.973879 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-20 09:23:06.973891 | orchestrator | Saturday 20 September 2025  09:23:03 +0000 (0:00:00.706)       0:06:20.940 ****
2025-09-20 09:23:06.973902 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:23:06.973913 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:23:06.973923 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:23:06.973934 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:23:06.973945 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:23:06.973955 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:23:06.973966 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:23:06.973977 | orchestrator |
2025-09-20 09:23:06.973987 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-20 09:23:06.973998 | orchestrator | Saturday 20 September 2025  09:23:04 +0000 (0:00:00.495)       0:06:21.436 ****
2025-09-20 09:23:06.974009 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:23:06.974074 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:23:06.974085 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:23:06.974096 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:23:06.974107 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:23:06.974117 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:23:06.974128 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:23:06.974139 | orchestrator |
2025-09-20 09:23:06.974185 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-20 09:23:06.974196 | orchestrator | Saturday 20 September 2025  09:23:04 +0000 (0:00:00.530)       0:06:21.987 ****
2025-09-20 09:23:06.974207 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:23:06.974218 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:23:06.974229 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:23:06.974239 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:23:06.974250 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:23:06.974270 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:23:06.974281 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:23:06.974292 | orchestrator |
2025-09-20 09:23:06.974303 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-20 09:23:06.974314 | orchestrator | Saturday 20 September 2025  09:23:05 +0000 (0:00:00.530)       0:06:22.517 ****
2025-09-20 09:23:06.974325 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:06.974350 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:23:29.571607 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:23:29.571719 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:23:29.571734 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:23:29.571746 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:23:29.571757 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:23:29.571769 | orchestrator |
2025-09-20 09:23:29.571782 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-20 09:23:29.571795 | orchestrator | Saturday 20 September 2025  09:23:06 +0000 (0:00:01.604)       0:06:24.122 ****
2025-09-20 09:23:29.571807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:23:29.571820 | orchestrator |
2025-09-20 09:23:29.571884 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-20 09:23:29.571896 | orchestrator | Saturday 20 September 2025  09:23:08 +0000 (0:00:01.097)       0:06:25.220 ****
2025-09-20 09:23:29.571907 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:29.571918 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:29.571930 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:29.571941 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:29.571952 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:29.571963 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:29.571973 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:29.571984 | orchestrator |
2025-09-20 09:23:29.571995 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-09-20 09:23:29.572006 | orchestrator | Saturday 20 September 2025  09:23:08 +0000 (0:00:00.831)       0:06:26.052 ****
2025-09-20 09:23:29.572017 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:29.572028 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:29.572038 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:29.572049 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:29.572061 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:29.572072 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:29.572083 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:29.572094 | orchestrator |
2025-09-20 09:23:29.572105 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-09-20 09:23:29.572115 | orchestrator | Saturday 20 September 2025  09:23:09 +0000 (0:00:00.846)       0:06:26.898 ****
2025-09-20 09:23:29.572126 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:29.572137 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:29.572167 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:29.572181 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:29.572193 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:29.572205 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:29.572218 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:29.572230 | orchestrator |
2025-09-20 09:23:29.572242 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-09-20 09:23:29.572256 | orchestrator | Saturday 20 September 2025  09:23:11 +0000 (0:00:01.365)       0:06:28.263 ****
2025-09-20 09:23:29.572269 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:23:29.572281 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:23:29.572293 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:23:29.572305 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:23:29.572318 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:23:29.572330 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:23:29.572367 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:23:29.572380 | orchestrator |
2025-09-20 09:23:29.572392 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-09-20 09:23:29.572406 | orchestrator | Saturday 20 September 2025  09:23:12 +0000 (0:00:01.625)       0:06:29.889 ****
2025-09-20 09:23:29.572418 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:29.572431 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:29.572443 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:29.572456 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:29.572468 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:29.572480 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:29.572493 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:29.572504 | orchestrator |
2025-09-20 09:23:29.572515 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-09-20 09:23:29.572526 | orchestrator | Saturday 20 September 2025  09:23:14 +0000 (0:00:01.320)       0:06:31.209 ****
2025-09-20 09:23:29.572537 | orchestrator | changed: [testbed-manager]
2025-09-20 09:23:29.572548 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:29.572558 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:29.572569 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:29.572580 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:29.572591 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:29.572601 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:29.572612 | orchestrator |
2025-09-20 09:23:29.572623 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-20 09:23:29.572634 | orchestrator | Saturday 20 September 2025  09:23:15 +0000 (0:00:01.387)       0:06:32.597 ****
2025-09-20 09:23:29.572645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:23:29.572657 | orchestrator |
2025-09-20 09:23:29.572668 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-20 09:23:29.572678 | orchestrator | Saturday 20 September 2025  09:23:16 +0000 (0:00:01.032)       0:06:33.630 ****
2025-09-20 09:23:29.572689 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:29.572700 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:23:29.572711 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:23:29.572722 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:23:29.572733 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:23:29.572744 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:23:29.572755 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:23:29.572765 | orchestrator |
2025-09-20 09:23:29.572776 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-20 09:23:29.572787 | orchestrator | Saturday 20 September 2025  09:23:17 +0000 (0:00:01.355)       0:06:34.985 ****
2025-09-20 09:23:29.572798 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:29.572809 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:23:29.572857 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:23:29.572870 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:23:29.572881 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:23:29.572892 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:23:29.572903 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:23:29.572914 | orchestrator |
2025-09-20 09:23:29.572925 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-20 09:23:29.572936 | orchestrator | Saturday 20 September 2025  09:23:18 +0000 (0:00:01.141)       0:06:36.127 ****
2025-09-20 09:23:29.572947 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:29.572958 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:23:29.572969 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:23:29.572980 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:23:29.572991 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:23:29.573002 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:23:29.573013 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:23:29.573024 | orchestrator |
2025-09-20 09:23:29.573035 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-20 09:23:29.573054 | orchestrator | Saturday 20 September 2025  09:23:20 +0000 (0:00:01.137)       0:06:37.264 ****
2025-09-20 09:23:29.573065 | orchestrator | ok: [testbed-manager]
2025-09-20 09:23:29.573076 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:23:29.573086 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:23:29.573097 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:23:29.573108 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:23:29.573119 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:23:29.573130 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:23:29.573141 | orchestrator |
2025-09-20 09:23:29.573152 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-20 09:23:29.573163 | orchestrator | Saturday 20 September 2025  09:23:21 +0000 (0:00:01.168)       0:06:38.433 ****
2025-09-20 09:23:29.573175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:23:29.573186 | orchestrator |
2025-09-20 09:23:29.573197 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-20 09:23:29.573208 | orchestrator | Saturday 20 September 2025  09:23:22 +0000 (0:00:01.114)       0:06:39.548 ****
2025-09-20 09:23:29.573219 | orchestrator |
2025-09-20 09:23:29.573230 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-20 09:23:29.573256 | orchestrator | Saturday 20 September 2025  09:23:22 +0000 (0:00:00.046)       0:06:39.594 ****
2025-09-20 09:23:29.573267 | orchestrator |
2025-09-20 09:23:29.573280 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-20 09:23:29.573290 | orchestrator | Saturday 20 September 2025  09:23:22 +0000 (0:00:00.041)       0:06:39.635 ****
2025-09-20 09:23:29.573301 | orchestrator |
2025-09-20 09:23:29.573313 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-20 09:23:29.573324 | orchestrator | Saturday 20 September 2025  09:23:22 +0000 (0:00:00.040)       0:06:39.676 ****
2025-09-20 09:23:29.573335 | orchestrator |
2025-09-20 09:23:29.573346 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-20 09:23:29.573357 | orchestrator | Saturday 20 September 2025  09:23:22 +0000 (0:00:00.046)       0:06:39.722 ****
2025-09-20 09:23:29.573368 | orchestrator |
2025-09-20 09:23:29.573379 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-20 09:23:29.573390 | orchestrator | Saturday 20 September 2025  09:23:22 +0000 (0:00:00.039)       0:06:39.761 ****
2025-09-20 09:23:29.573401 | orchestrator |
2025-09-20 09:23:29.573412 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-20 09:23:29.573423 | orchestrator | Saturday 20 September 2025  09:23:22 +0000 (0:00:00.039)       0:06:39.801 ****
2025-09-20 09:23:29.573434 | orchestrator |
2025-09-20 09:23:29.573445 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-20 09:23:29.573455 | orchestrator | Saturday 20 September 2025  09:23:22 +0000 (0:00:00.048)       0:06:39.849 ****
2025-09-20 09:23:29.573466 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:23:29.573477 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:23:29.573488 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:23:29.573499 | orchestrator |
2025-09-20 09:23:29.573510 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-20 09:23:29.573521 | orchestrator | Saturday 20 September 2025  09:23:23 +0000 (0:00:01.236)       0:06:41.086 ****
2025-09-20 09:23:29.573532 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:23:29.573543 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:29.573554 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:23:29.573565 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:23:29.573576 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:23:29.573587 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:23:29.573598 | orchestrator | changed: [testbed-manager]
2025-09-20 09:23:29.573609 | orchestrator |
2025-09-20 09:23:29.573620 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-20 09:23:29.573638 | orchestrator | Saturday 20 September 2025  09:23:25 +0000 (0:00:01.884)       0:06:42.970 ****
2025-09-20 09:23:29.573649 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:23:29.573660 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:23:29.573671 | orchestrator | changed: [testbed-node-1]
2025-09-20
09:23:29.573681 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:23:29.573692 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:23:29.573703 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:23:29.573714 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:23:29.573725 | orchestrator | 2025-09-20 09:23:29.573736 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-20 09:23:29.573747 | orchestrator | Saturday 20 September 2025 09:23:28 +0000 (0:00:02.589) 0:06:45.559 **** 2025-09-20 09:23:29.573758 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:23:29.573768 | orchestrator | 2025-09-20 09:23:29.573787 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-20 09:23:29.573798 | orchestrator | Saturday 20 September 2025 09:23:28 +0000 (0:00:00.103) 0:06:45.663 **** 2025-09-20 09:23:29.573809 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:29.573821 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:23:29.573848 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:23:29.573859 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:23:29.573876 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:23:55.579955 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:23:55.580077 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:23:55.580093 | orchestrator | 2025-09-20 09:23:55.580106 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-20 09:23:55.580119 | orchestrator | Saturday 20 September 2025 09:23:29 +0000 (0:00:01.056) 0:06:46.719 **** 2025-09-20 09:23:55.580131 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:23:55.580142 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:23:55.580153 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:23:55.580164 | orchestrator | skipping: [testbed-node-2] 2025-09-20 
09:23:55.580175 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:23:55.580186 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:23:55.580197 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:23:55.580208 | orchestrator | 2025-09-20 09:23:55.580220 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-20 09:23:55.580231 | orchestrator | Saturday 20 September 2025 09:23:30 +0000 (0:00:00.536) 0:06:47.256 **** 2025-09-20 09:23:55.580243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:23:55.580257 | orchestrator | 2025-09-20 09:23:55.580268 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-20 09:23:55.580280 | orchestrator | Saturday 20 September 2025 09:23:31 +0000 (0:00:00.956) 0:06:48.213 **** 2025-09-20 09:23:55.580292 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:55.580304 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:23:55.580315 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:23:55.580326 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:23:55.580337 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:23:55.580348 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:23:55.580359 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:23:55.580370 | orchestrator | 2025-09-20 09:23:55.580381 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-20 09:23:55.580392 | orchestrator | Saturday 20 September 2025 09:23:32 +0000 (0:00:01.115) 0:06:49.328 **** 2025-09-20 09:23:55.580403 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-20 09:23:55.580414 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-20 09:23:55.580425 
| orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-20 09:23:55.580477 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-20 09:23:55.580492 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-20 09:23:55.580504 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-20 09:23:55.580517 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-20 09:23:55.580530 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-20 09:23:55.580544 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-20 09:23:55.580556 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-20 09:23:55.580568 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-20 09:23:55.580581 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-20 09:23:55.580594 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-20 09:23:55.580606 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-20 09:23:55.580618 | orchestrator | 2025-09-20 09:23:55.580630 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-20 09:23:55.580643 | orchestrator | Saturday 20 September 2025 09:23:34 +0000 (0:00:02.505) 0:06:51.834 **** 2025-09-20 09:23:55.580655 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:23:55.580667 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:23:55.580679 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:23:55.580692 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:23:55.580704 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:23:55.580716 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:23:55.580728 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:23:55.580741 | orchestrator | 2025-09-20 09:23:55.580753 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-20 09:23:55.580766 | orchestrator | Saturday 20 September 2025 09:23:35 +0000 (0:00:00.528) 0:06:52.362 **** 2025-09-20 09:23:55.580780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:23:55.580795 | orchestrator | 2025-09-20 09:23:55.580808 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-20 09:23:55.580844 | orchestrator | Saturday 20 September 2025 09:23:36 +0000 (0:00:00.996) 0:06:53.358 **** 2025-09-20 09:23:55.580903 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:55.580926 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:23:55.580948 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:23:55.580959 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:23:55.580970 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:23:55.580981 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:23:55.580992 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:23:55.581003 | orchestrator | 2025-09-20 09:23:55.581014 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-20 09:23:55.581025 | orchestrator | Saturday 20 September 2025 09:23:37 +0000 (0:00:00.844) 0:06:54.203 **** 2025-09-20 09:23:55.581036 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:55.581047 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:23:55.581058 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:23:55.581068 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:23:55.581079 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:23:55.581090 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:23:55.581101 | orchestrator | ok: [testbed-node-5] 2025-09-20 
09:23:55.581112 | orchestrator | 2025-09-20 09:23:55.581123 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-20 09:23:55.581151 | orchestrator | Saturday 20 September 2025 09:23:37 +0000 (0:00:00.817) 0:06:55.021 **** 2025-09-20 09:23:55.581163 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:23:55.581174 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:23:55.581185 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:23:55.581213 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:23:55.581225 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:23:55.581236 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:23:55.581247 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:23:55.581258 | orchestrator | 2025-09-20 09:23:55.581269 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-20 09:23:55.581280 | orchestrator | Saturday 20 September 2025 09:23:38 +0000 (0:00:00.503) 0:06:55.525 **** 2025-09-20 09:23:55.581291 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:55.581302 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:23:55.581313 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:23:55.581323 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:23:55.581334 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:23:55.581345 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:23:55.581356 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:23:55.581367 | orchestrator | 2025-09-20 09:23:55.581378 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-20 09:23:55.581389 | orchestrator | Saturday 20 September 2025 09:23:40 +0000 (0:00:01.660) 0:06:57.186 **** 2025-09-20 09:23:55.581401 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:23:55.581412 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:23:55.581423 | orchestrator | skipping: 
[testbed-node-1] 2025-09-20 09:23:55.581434 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:23:55.581445 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:23:55.581456 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:23:55.581466 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:23:55.581477 | orchestrator | 2025-09-20 09:23:55.581488 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-20 09:23:55.581499 | orchestrator | Saturday 20 September 2025 09:23:40 +0000 (0:00:00.503) 0:06:57.689 **** 2025-09-20 09:23:55.581510 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:55.581521 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:23:55.581532 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:23:55.581543 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:23:55.581554 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:23:55.581565 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:23:55.581576 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:23:55.581587 | orchestrator | 2025-09-20 09:23:55.581598 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-20 09:23:55.581615 | orchestrator | Saturday 20 September 2025 09:23:48 +0000 (0:00:07.539) 0:07:05.229 **** 2025-09-20 09:23:55.581626 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:55.581637 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:23:55.581648 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:23:55.581659 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:23:55.581669 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:23:55.581680 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:23:55.581691 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:23:55.581702 | orchestrator | 2025-09-20 09:23:55.581713 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-09-20 09:23:55.581724 | orchestrator | Saturday 20 September 2025 09:23:49 +0000 (0:00:01.323) 0:07:06.553 **** 2025-09-20 09:23:55.581735 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:55.581746 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:23:55.581756 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:23:55.581767 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:23:55.581778 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:23:55.581789 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:23:55.581800 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:23:55.581810 | orchestrator | 2025-09-20 09:23:55.581843 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-20 09:23:55.581854 | orchestrator | Saturday 20 September 2025 09:23:51 +0000 (0:00:01.834) 0:07:08.388 **** 2025-09-20 09:23:55.581865 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:55.581885 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:23:55.581896 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:23:55.581907 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:23:55.581918 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:23:55.581928 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:23:55.581940 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:23:55.581951 | orchestrator | 2025-09-20 09:23:55.581962 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-20 09:23:55.581973 | orchestrator | Saturday 20 September 2025 09:23:53 +0000 (0:00:01.968) 0:07:10.356 **** 2025-09-20 09:23:55.581984 | orchestrator | ok: [testbed-manager] 2025-09-20 09:23:55.581995 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:23:55.582006 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:23:55.582079 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:23:55.582094 | orchestrator | ok: 
[testbed-node-3] 2025-09-20 09:23:55.582106 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:23:55.582117 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:23:55.582127 | orchestrator | 2025-09-20 09:23:55.582139 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-20 09:23:55.582150 | orchestrator | Saturday 20 September 2025 09:23:54 +0000 (0:00:00.861) 0:07:11.218 **** 2025-09-20 09:23:55.582161 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:23:55.582172 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:23:55.582183 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:23:55.582194 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:23:55.582205 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:23:55.582216 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:23:55.582227 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:23:55.582238 | orchestrator | 2025-09-20 09:23:55.582250 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-20 09:23:55.582261 | orchestrator | Saturday 20 September 2025 09:23:55 +0000 (0:00:00.987) 0:07:12.205 **** 2025-09-20 09:23:55.582272 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:23:55.582283 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:23:55.582294 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:23:55.582305 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:23:55.582316 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:23:55.582327 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:23:55.582338 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:23:55.582349 | orchestrator | 2025-09-20 09:23:55.582367 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-20 09:24:28.232774 | orchestrator | Saturday 20 September 2025 09:23:55 +0000 (0:00:00.523) 0:07:12.729 
**** 2025-09-20 09:24:28.232934 | orchestrator | ok: [testbed-manager] 2025-09-20 09:24:28.232951 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:24:28.232963 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:24:28.232974 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:24:28.232985 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:24:28.232996 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:24:28.233008 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:24:28.233019 | orchestrator | 2025-09-20 09:24:28.233032 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-20 09:24:28.233044 | orchestrator | Saturday 20 September 2025 09:23:56 +0000 (0:00:00.580) 0:07:13.310 **** 2025-09-20 09:24:28.233055 | orchestrator | ok: [testbed-manager] 2025-09-20 09:24:28.233066 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:24:28.233077 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:24:28.233088 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:24:28.233099 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:24:28.233110 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:24:28.233121 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:24:28.233132 | orchestrator | 2025-09-20 09:24:28.233143 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-20 09:24:28.233154 | orchestrator | Saturday 20 September 2025 09:23:56 +0000 (0:00:00.526) 0:07:13.837 **** 2025-09-20 09:24:28.233192 | orchestrator | ok: [testbed-manager] 2025-09-20 09:24:28.233203 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:24:28.233214 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:24:28.233225 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:24:28.233236 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:24:28.233247 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:24:28.233258 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:24:28.233269 | orchestrator | 
2025-09-20 09:24:28.233280 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-20 09:24:28.233291 | orchestrator | Saturday 20 September 2025 09:23:57 +0000 (0:00:00.510) 0:07:14.348 **** 2025-09-20 09:24:28.233302 | orchestrator | ok: [testbed-manager] 2025-09-20 09:24:28.233313 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:24:28.233324 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:24:28.233335 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:24:28.233346 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:24:28.233357 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:24:28.233367 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:24:28.233378 | orchestrator | 2025-09-20 09:24:28.233390 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-20 09:24:28.233415 | orchestrator | Saturday 20 September 2025 09:24:02 +0000 (0:00:05.812) 0:07:20.161 **** 2025-09-20 09:24:28.233428 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:24:28.233440 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:24:28.233451 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:24:28.233462 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:24:28.233474 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:24:28.233485 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:24:28.233496 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:24:28.233507 | orchestrator | 2025-09-20 09:24:28.233519 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-20 09:24:28.233530 | orchestrator | Saturday 20 September 2025 09:24:03 +0000 (0:00:00.541) 0:07:20.702 **** 2025-09-20 09:24:28.233544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:24:28.233558 | orchestrator | 2025-09-20 09:24:28.233570 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-20 09:24:28.233581 | orchestrator | Saturday 20 September 2025 09:24:04 +0000 (0:00:00.810) 0:07:21.513 **** 2025-09-20 09:24:28.233592 | orchestrator | ok: [testbed-manager] 2025-09-20 09:24:28.233604 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:24:28.233615 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:24:28.233626 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:24:28.233638 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:24:28.233649 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:24:28.233660 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:24:28.233672 | orchestrator | 2025-09-20 09:24:28.233683 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-20 09:24:28.233694 | orchestrator | Saturday 20 September 2025 09:24:06 +0000 (0:00:01.937) 0:07:23.451 **** 2025-09-20 09:24:28.233706 | orchestrator | ok: [testbed-manager] 2025-09-20 09:24:28.233717 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:24:28.233728 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:24:28.233739 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:24:28.233750 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:24:28.233761 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:24:28.233772 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:24:28.233783 | orchestrator | 2025-09-20 09:24:28.233795 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-20 09:24:28.233828 | orchestrator | Saturday 20 September 2025 09:24:07 +0000 (0:00:01.312) 0:07:24.763 **** 2025-09-20 09:24:28.233839 | orchestrator | ok: [testbed-manager] 2025-09-20 09:24:28.233851 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:24:28.233869 | 
orchestrator | ok: [testbed-node-1] 2025-09-20 09:24:28.233881 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:24:28.233892 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:24:28.233903 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:24:28.233914 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:24:28.233926 | orchestrator | 2025-09-20 09:24:28.233937 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-20 09:24:28.233948 | orchestrator | Saturday 20 September 2025 09:24:08 +0000 (0:00:00.884) 0:07:25.648 **** 2025-09-20 09:24:28.233960 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 09:24:28.233973 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 09:24:28.233985 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 09:24:28.234075 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 09:24:28.234092 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 09:24:28.234103 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 09:24:28.234115 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-20 09:24:28.234126 | orchestrator | 2025-09-20 09:24:28.234138 | orchestrator | TASK [osism.services.lldpd : Include 
distribution specific install tasks] ****** 2025-09-20 09:24:28.234149 | orchestrator | Saturday 20 September 2025 09:24:10 +0000 (0:00:01.717) 0:07:27.365 **** 2025-09-20 09:24:28.234160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:24:28.234172 | orchestrator | 2025-09-20 09:24:28.234183 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-20 09:24:28.234194 | orchestrator | Saturday 20 September 2025 09:24:11 +0000 (0:00:00.910) 0:07:28.275 **** 2025-09-20 09:24:28.234205 | orchestrator | changed: [testbed-manager] 2025-09-20 09:24:28.234216 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:24:28.234227 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:24:28.234238 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:24:28.234249 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:24:28.234260 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:24:28.234271 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:24:28.234282 | orchestrator | 2025-09-20 09:24:28.234293 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-20 09:24:28.234304 | orchestrator | Saturday 20 September 2025 09:24:20 +0000 (0:00:09.250) 0:07:37.526 **** 2025-09-20 09:24:28.234315 | orchestrator | ok: [testbed-manager] 2025-09-20 09:24:28.234333 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:24:28.234345 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:24:28.234356 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:24:28.234367 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:24:28.234378 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:24:28.234389 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:24:28.234399 | 
orchestrator |
2025-09-20 09:24:28.234411 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-20 09:24:28.234422 | orchestrator | Saturday 20 September 2025 09:24:22 +0000 (0:00:01.975) 0:07:39.501 ****
2025-09-20 09:24:28.234433 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:24:28.234444 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:24:28.234464 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:24:28.234475 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:24:28.234486 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:24:28.234497 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:24:28.234508 | orchestrator |
2025-09-20 09:24:28.234519 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-20 09:24:28.234530 | orchestrator | Saturday 20 September 2025 09:24:23 +0000 (0:00:01.286) 0:07:40.787 ****
2025-09-20 09:24:28.234541 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:28.234552 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:28.234563 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:28.234574 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:28.234585 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:28.234595 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:28.234606 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:28.234617 | orchestrator |
2025-09-20 09:24:28.234628 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-20 09:24:28.234639 | orchestrator |
2025-09-20 09:24:28.234650 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-20 09:24:28.234661 | orchestrator | Saturday 20 September 2025 09:24:24 +0000 (0:00:01.225) 0:07:42.013 ****
2025-09-20 09:24:28.234672 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:24:28.234683 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:24:28.234694 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:24:28.234705 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:24:28.234716 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:24:28.234726 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:24:28.234737 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:24:28.234748 | orchestrator |
2025-09-20 09:24:28.234759 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-20 09:24:28.234770 | orchestrator |
2025-09-20 09:24:28.234781 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-20 09:24:28.234792 | orchestrator | Saturday 20 September 2025 09:24:25 +0000 (0:00:00.535) 0:07:42.549 ****
2025-09-20 09:24:28.234819 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:28.234831 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:28.234842 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:28.234852 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:28.234863 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:28.234874 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:28.234885 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:28.234896 | orchestrator |
2025-09-20 09:24:28.234906 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-20 09:24:28.234918 | orchestrator | Saturday 20 September 2025 09:24:26 +0000 (0:00:01.358) 0:07:43.908 ****
2025-09-20 09:24:28.234928 | orchestrator | ok: [testbed-manager]
2025-09-20 09:24:28.234939 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:24:28.234950 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:24:28.234961 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:24:28.234972 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:24:28.234983 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:24:28.234993 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:24:28.235004 | orchestrator |
2025-09-20 09:24:28.235015 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-20 09:24:28.235033 | orchestrator | Saturday 20 September 2025 09:24:28 +0000 (0:00:01.471) 0:07:45.379 ****
2025-09-20 09:24:51.515551 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:24:51.515669 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:24:51.515686 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:24:51.515698 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:24:51.515709 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:24:51.515721 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:24:51.515732 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:24:51.515743 | orchestrator |
2025-09-20 09:24:51.515808 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-20 09:24:51.515824 | orchestrator | Saturday 20 September 2025 09:24:28 +0000 (0:00:00.430) 0:07:45.809 ****
2025-09-20 09:24:51.515835 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:24:51.515847 | orchestrator |
2025-09-20 09:24:51.515859 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-20 09:24:51.515869 | orchestrator | Saturday 20 September 2025 09:24:29 +0000 (0:00:00.874) 0:07:46.684 ****
2025-09-20 09:24:51.515882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:24:51.515895 | orchestrator |
2025-09-20 09:24:51.515906 | orchestrator |
TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-20 09:24:51.515916 | orchestrator | Saturday 20 September 2025 09:24:30 +0000 (0:00:00.762) 0:07:47.446 ****
2025-09-20 09:24:51.515927 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:51.515938 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:51.515948 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:51.515959 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:51.515969 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:51.515980 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:51.515990 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:51.516001 | orchestrator |
2025-09-20 09:24:51.516012 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-20 09:24:51.516023 | orchestrator | Saturday 20 September 2025 09:24:38 +0000 (0:00:08.484) 0:07:55.930 ****
2025-09-20 09:24:51.516033 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:51.516044 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:51.516055 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:51.516065 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:51.516076 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:51.516087 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:51.516100 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:51.516112 | orchestrator |
2025-09-20 09:24:51.516124 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-20 09:24:51.516136 | orchestrator | Saturday 20 September 2025 09:24:39 +0000 (0:00:00.844) 0:07:56.775 ****
2025-09-20 09:24:51.516149 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:51.516161 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:51.516173 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:51.516185 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:51.516198 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:51.516210 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:51.516222 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:51.516235 | orchestrator |
2025-09-20 09:24:51.516248 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-20 09:24:51.516260 | orchestrator | Saturday 20 September 2025 09:24:41 +0000 (0:00:01.577) 0:07:58.353 ****
2025-09-20 09:24:51.516272 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:51.516285 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:51.516296 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:51.516306 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:51.516317 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:51.516327 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:51.516338 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:51.516349 | orchestrator |
2025-09-20 09:24:51.516359 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-09-20 09:24:51.516370 | orchestrator | Saturday 20 September 2025 09:24:42 +0000 (0:00:01.709) 0:08:00.062 ****
2025-09-20 09:24:51.516381 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:51.516400 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:51.516410 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:51.516421 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:51.516431 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:51.516442 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:51.516452 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:51.516463 | orchestrator |
2025-09-20 09:24:51.516474 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-20 09:24:51.516484 | orchestrator | Saturday 20 September 2025 09:24:44 +0000 (0:00:01.196) 0:08:01.259 ****
2025-09-20 09:24:51.516495 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:51.516506 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:51.516516 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:51.516527 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:51.516537 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:51.516548 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:51.516558 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:51.516569 | orchestrator |
2025-09-20 09:24:51.516580 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-09-20 09:24:51.516590 | orchestrator |
2025-09-20 09:24:51.516601 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-09-20 09:24:51.516612 | orchestrator | Saturday 20 September 2025 09:24:45 +0000 (0:00:01.347) 0:08:02.606 ****
2025-09-20 09:24:51.516622 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:24:51.516633 | orchestrator |
2025-09-20 09:24:51.516644 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-20 09:24:51.516671 | orchestrator | Saturday 20 September 2025 09:24:46 +0000 (0:00:00.908) 0:08:03.514 ****
2025-09-20 09:24:51.516682 | orchestrator | ok: [testbed-manager]
2025-09-20 09:24:51.516694 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:24:51.516705 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:24:51.516715 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:24:51.516726 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:24:51.516737 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:24:51.516747 | orchestrator | ok: [testbed-node-5]
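The smartd tasks above install smartmontools, create /var/log/smartd, and copy a configuration file whose contents are not printed in the log. Purely as an illustrative sketch (this is not the file shipped by osism.services.smartd), a smartd.conf that monitors all detected devices and writes attribute logs under that directory could look like:

```conf
# /etc/smartmontools/smartd.conf -- illustrative sketch only
# Monitor all detected devices (-a), enable automatic offline testing (-o)
# and attribute autosave (-S), and write attribute CSV logs with the
# given prefix below /var/log/smartd/.
DEVICESCAN -a -o on -S on -A /var/log/smartd/
```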
2025-09-20 09:24:51.516758 | orchestrator |
2025-09-20 09:24:51.516791 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-20 09:24:51.516812 | orchestrator | Saturday 20 September 2025 09:24:47 +0000 (0:00:00.844) 0:08:04.359 ****
2025-09-20 09:24:51.516832 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:51.516899 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:51.516913 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:51.516924 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:51.516934 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:51.516945 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:51.516956 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:51.516966 | orchestrator |
2025-09-20 09:24:51.516977 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-09-20 09:24:51.516988 | orchestrator | Saturday 20 September 2025 09:24:48 +0000 (0:00:01.284) 0:08:05.644 ****
2025-09-20 09:24:51.516999 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:24:51.517010 | orchestrator |
2025-09-20 09:24:51.517021 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-20 09:24:51.517032 | orchestrator | Saturday 20 September 2025 09:24:49 +0000 (0:00:00.818) 0:08:06.462 ****
2025-09-20 09:24:51.517043 | orchestrator | ok: [testbed-manager]
2025-09-20 09:24:51.517053 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:24:51.517064 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:24:51.517075 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:24:51.517086 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:24:51.517105 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:24:51.517116 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:24:51.517127 | orchestrator |
2025-09-20 09:24:51.517138 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-20 09:24:51.517149 | orchestrator | Saturday 20 September 2025 09:24:50 +0000 (0:00:00.822) 0:08:07.284 ****
2025-09-20 09:24:51.517165 | orchestrator | changed: [testbed-manager]
2025-09-20 09:24:51.517176 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:24:51.517187 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:24:51.517198 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:24:51.517209 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:24:51.517220 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:24:51.517230 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:24:51.517241 | orchestrator |
2025-09-20 09:24:51.517252 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:24:51.517264 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-09-20 09:24:51.517276 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-20 09:24:51.517287 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-20 09:24:51.517298 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-20 09:24:51.517309 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-20 09:24:51.517320 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-20 09:24:51.517331 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-20 09:24:51.517342 | orchestrator |
2025-09-20 09:24:51.517353 | orchestrator |
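The two osism.commons.state invocations above create Ansible's custom facts directory and write the bootstrap status and timestamp into files there. The exact file layout is not shown in the log; as an assumption for illustration only, local facts of this kind are commonly INI-style files under /etc/ansible/facts.d, which the setup module then exposes under `ansible_local`:

```ini
; /etc/ansible/facts.d/osism.fact -- hypothetical example, not the role's
; actual output. A file like this would appear to playbooks as
; ansible_local.osism.bootstrap.status and ...bootstrap.timestamp.
[bootstrap]
status = True
timestamp = 2025-09-20T09:24:50+00:00
```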
2025-09-20 09:24:51.517364 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:24:51.517375 | orchestrator | Saturday 20 September 2025 09:24:51 +0000 (0:00:01.367) 0:08:08.652 ****
2025-09-20 09:24:51.517386 | orchestrator | ===============================================================================
2025-09-20 09:24:51.517397 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.71s
2025-09-20 09:24:51.517408 | orchestrator | osism.commons.packages : Download required packages -------------------- 42.54s
2025-09-20 09:24:51.517418 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.13s
2025-09-20 09:24:51.517429 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.97s
2025-09-20 09:24:51.517440 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.76s
2025-09-20 09:24:51.517451 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.94s
2025-09-20 09:24:51.517463 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.75s
2025-09-20 09:24:51.517473 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.25s
2025-09-20 09:24:51.517484 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.25s
2025-09-20 09:24:51.517495 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.03s
2025-09-20 09:24:51.517515 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.48s
2025-09-20 09:24:51.957054 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.17s
2025-09-20 09:24:51.957141 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.05s
2025-09-20 09:24:51.957175 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.87s
2025-09-20 09:24:51.957186 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.70s
2025-09-20 09:24:51.957196 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.54s
2025-09-20 09:24:51.957206 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.50s
2025-09-20 09:24:51.957216 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.41s
2025-09-20 09:24:51.957226 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.88s
2025-09-20 09:24:51.957235 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.81s
2025-09-20 09:24:52.247075 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-20 09:24:52.247157 | orchestrator | + osism apply network
2025-09-20 09:25:05.015959 | orchestrator | 2025-09-20 09:25:05 | INFO  | Task 5e54ae99-fed2-4273-8682-e8e3e8b57993 (network) was prepared for execution.
2025-09-20 09:25:05.016073 | orchestrator | 2025-09-20 09:25:05 | INFO  | It takes a moment until task 5e54ae99-fed2-4273-8682-e8e3e8b57993 (network) has been started and output is visible here.
2025-09-20 09:25:33.938080 | orchestrator |
2025-09-20 09:25:33.938185 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-20 09:25:33.938203 | orchestrator |
2025-09-20 09:25:33.938216 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-20 09:25:33.938228 | orchestrator | Saturday 20 September 2025 09:25:09 +0000 (0:00:00.284) 0:00:00.284 ****
2025-09-20 09:25:33.938239 | orchestrator | ok: [testbed-manager]
2025-09-20 09:25:33.938251 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:25:33.938262 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:25:33.938274 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:25:33.938286 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:25:33.938297 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:25:33.938307 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:25:33.938318 | orchestrator |
2025-09-20 09:25:33.938329 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-20 09:25:33.938340 | orchestrator | Saturday 20 September 2025 09:25:10 +0000 (0:00:00.700) 0:00:00.984 ****
2025-09-20 09:25:33.938353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:25:33.938367 | orchestrator |
2025-09-20 09:25:33.938379 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-20 09:25:33.938390 | orchestrator | Saturday 20 September 2025 09:25:11 +0000 (0:00:01.215) 0:00:02.200 ****
2025-09-20 09:25:33.938400 | orchestrator | ok: [testbed-manager]
2025-09-20 09:25:33.938411 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:25:33.938422 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:25:33.938433 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:25:33.938443 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:25:33.938454 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:25:33.938465 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:25:33.938475 | orchestrator |
2025-09-20 09:25:33.938487 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-20 09:25:33.938497 | orchestrator | Saturday 20 September 2025 09:25:13 +0000 (0:00:02.008) 0:00:04.209 ****
2025-09-20 09:25:33.938508 | orchestrator | ok: [testbed-manager]
2025-09-20 09:25:33.938519 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:25:33.938530 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:25:33.938541 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:25:33.938553 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:25:33.938565 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:25:33.938577 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:25:33.938589 | orchestrator |
2025-09-20 09:25:33.938602 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-20 09:25:33.938638 | orchestrator | Saturday 20 September 2025 09:25:15 +0000 (0:00:01.731) 0:00:05.940 ****
2025-09-20 09:25:33.938651 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-20 09:25:33.938664 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-20 09:25:33.938675 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-20 09:25:33.938686 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-20 09:25:33.938697 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-20 09:25:33.938708 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-20 09:25:33.938754 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-20 09:25:33.938767 | orchestrator |
2025-09-20 09:25:33.938778 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-20 09:25:33.938789 | orchestrator | Saturday 20 September 2025 09:25:16 +0000 (0:00:00.985) 0:00:06.926 ****
2025-09-20 09:25:33.938800 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-20 09:25:33.938811 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-20 09:25:33.938822 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 09:25:33.938833 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-20 09:25:33.938844 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-20 09:25:33.938855 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-20 09:25:33.938866 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-20 09:25:33.938877 | orchestrator |
2025-09-20 09:25:33.938888 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-20 09:25:33.938898 | orchestrator | Saturday 20 September 2025 09:25:19 +0000 (0:00:03.544) 0:00:10.470 ****
2025-09-20 09:25:33.938909 | orchestrator | changed: [testbed-manager]
2025-09-20 09:25:33.938921 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:25:33.938932 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:25:33.938942 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:25:33.938953 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:25:33.938964 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:25:33.938975 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:25:33.938986 | orchestrator |
2025-09-20 09:25:33.938997 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-20 09:25:33.939008 | orchestrator | Saturday 20 September 2025 09:25:21 +0000 (0:00:01.495) 0:00:11.966 ****
2025-09-20 09:25:33.939019 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 09:25:33.939030 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-20 09:25:33.939040 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-20 09:25:33.939051 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-20 09:25:33.939062 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-20 09:25:33.939073 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-20 09:25:33.939084 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-20 09:25:33.939095 | orchestrator |
2025-09-20 09:25:33.939106 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-20 09:25:33.939116 | orchestrator | Saturday 20 September 2025 09:25:23 +0000 (0:00:01.924) 0:00:13.890 ****
2025-09-20 09:25:33.939127 | orchestrator | ok: [testbed-manager]
2025-09-20 09:25:33.939138 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:25:33.939149 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:25:33.939160 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:25:33.939171 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:25:33.939182 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:25:33.939192 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:25:33.939203 | orchestrator |
2025-09-20 09:25:33.939214 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-20 09:25:33.939242 | orchestrator | Saturday 20 September 2025 09:25:24 +0000 (0:00:01.188) 0:00:15.078 ****
2025-09-20 09:25:33.939254 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:25:33.939265 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:25:33.939276 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:25:33.939297 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:25:33.939308 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:25:33.939319 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:25:33.939330 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:25:33.939340 | orchestrator |
2025-09-20 09:25:33.939351 | orchestrator | TASK [osism.commons.network : Install package
networkd-dispatcher] *************
2025-09-20 09:25:33.939378 | orchestrator | Saturday 20 September 2025 09:25:25 +0000 (0:00:00.659) 0:00:15.738 ****
2025-09-20 09:25:33.939389 | orchestrator | ok: [testbed-manager]
2025-09-20 09:25:33.939400 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:25:33.939411 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:25:33.939422 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:25:33.939433 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:25:33.939444 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:25:33.939455 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:25:33.939466 | orchestrator |
2025-09-20 09:25:33.939477 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-20 09:25:33.939488 | orchestrator | Saturday 20 September 2025 09:25:27 +0000 (0:00:02.055) 0:00:17.793 ****
2025-09-20 09:25:33.939499 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:25:33.939510 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:25:33.939521 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:25:33.939532 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:25:33.939543 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:25:33.939554 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:25:33.939565 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-20 09:25:33.939577 | orchestrator |
2025-09-20 09:25:33.939588 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-20 09:25:33.939600 | orchestrator | Saturday 20 September 2025 09:25:27 +0000 (0:00:00.796) 0:00:18.589 ****
2025-09-20 09:25:33.939610 | orchestrator | ok: [testbed-manager]
2025-09-20 09:25:33.939621 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:25:33.939632 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:25:33.939643 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:25:33.939654 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:25:33.939664 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:25:33.939675 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:25:33.939686 | orchestrator |
2025-09-20 09:25:33.939697 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-20 09:25:33.939707 | orchestrator | Saturday 20 September 2025 09:25:29 +0000 (0:00:01.615) 0:00:20.205 ****
2025-09-20 09:25:33.939739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:25:33.939753 | orchestrator |
2025-09-20 09:25:33.939764 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-20 09:25:33.939775 | orchestrator | Saturday 20 September 2025 09:25:30 +0000 (0:00:01.308) 0:00:21.513 ****
2025-09-20 09:25:33.939786 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:25:33.939797 | orchestrator | ok: [testbed-manager]
2025-09-20 09:25:33.939808 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:25:33.939819 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:25:33.939830 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:25:33.939841 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:25:33.939852 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:25:33.939862 | orchestrator |
2025-09-20 09:25:33.939873 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-20 09:25:33.939884 | orchestrator | Saturday 20 September 2025 09:25:31 +0000 (0:00:01.049) 0:00:22.562 ****
2025-09-20 09:25:33.939896 | orchestrator | ok: [testbed-manager]
2025-09-20 09:25:33.939907 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:25:33.939917 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:25:33.939936 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:25:33.939947 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:25:33.939958 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:25:33.939969 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:25:33.939980 | orchestrator |
2025-09-20 09:25:33.939991 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-20 09:25:33.940002 | orchestrator | Saturday 20 September 2025 09:25:32 +0000 (0:00:00.859) 0:00:23.422 ****
2025-09-20 09:25:33.940013 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-20 09:25:33.940024 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-20 09:25:33.940035 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-20 09:25:33.940046 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-20 09:25:33.940057 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-20 09:25:33.940068 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-20 09:25:33.940079 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-20 09:25:33.940090 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-20 09:25:33.940100 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-20 09:25:33.940111 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-20 09:25:33.940122 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-20 09:25:33.940133 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-20 09:25:33.940144 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-20 09:25:33.940155 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-20 09:25:33.940166 | orchestrator |
2025-09-20 09:25:33.940185 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-20 09:25:50.262428 | orchestrator | Saturday 20 September 2025 09:25:33 +0000 (0:00:01.230) 0:00:24.652 ****
2025-09-20 09:25:50.262538 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:25:50.262554 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:25:50.262566 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:25:50.262578 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:25:50.262589 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:25:50.262600 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:25:50.262612 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:25:50.262624 | orchestrator |
2025-09-20 09:25:50.262653 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-20 09:25:50.262665 | orchestrator | Saturday 20 September 2025 09:25:34 +0000 (0:00:00.658) 0:00:25.311 ****
2025-09-20 09:25:50.262679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-2, testbed-manager, testbed-node-0, testbed-node-3, testbed-node-5, testbed-node-4
2025-09-20 09:25:50.262767 | orchestrator |
2025-09-20 09:25:50.262788 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-20 09:25:50.262807 | orchestrator | Saturday 20 September 2025 09:25:39 +0000 (0:00:04.728) 0:00:30.039 ****
2025-09-20 09:25:50.262830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14',
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.262850 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.262872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.262918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.262930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.262941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.262953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-20 09:25:50.262966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.262978 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-20 09:25:50.262999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-20 09:25:50.263012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-20 09:25:50.263044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-20 09:25:50.263058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-20 09:25:50.263077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-20 09:25:50.263091 | orchestrator |
2025-09-20 09:25:50.263104 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-09-20 09:25:50.263116 | orchestrator | Saturday 20 September 2025 09:25:44 +0000 (0:00:05.213) 0:00:35.252 ****
2025-09-20 09:25:50.263129 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.263151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.263163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.263176 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-20 09:25:50.263189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-20 09:25:50.263202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11',
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-20 09:25:50.263215 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-20 09:25:50.263227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-20 09:25:50.263240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-20 09:25:50.263254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-20 09:25:50.263267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-20 09:25:50.263280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-20 09:25:50.263304 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-20 09:25:55.720128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-20 09:25:55.720239 | orchestrator | 2025-09-20 09:25:55.720258 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-20 09:25:55.720272 | orchestrator | Saturday 20 September 2025 09:25:50 +0000 (0:00:05.725) 0:00:40.978 **** 2025-09-20 09:25:55.720308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:25:55.720322 | orchestrator | 2025-09-20 09:25:55.720333 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-20 09:25:55.720344 | orchestrator | Saturday 20 September 2025 09:25:51 +0000 (0:00:01.127) 0:00:42.105 **** 2025-09-20 09:25:55.720355 | orchestrator | ok: [testbed-manager] 2025-09-20 09:25:55.720367 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:25:55.720378 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:25:55.720389 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:25:55.720399 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:25:55.720410 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:25:55.720421 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:25:55.720432 | orchestrator | 2025-09-20 09:25:55.720443 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-09-20 09:25:55.720454 | orchestrator | Saturday 20 September 2025 09:25:52 +0000 (0:00:01.052) 0:00:43.158 **** 2025-09-20 09:25:55.720465 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 09:25:55.720476 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 09:25:55.720487 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 09:25:55.720497 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 09:25:55.720508 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 09:25:55.720518 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 09:25:55.720529 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 09:25:55.720540 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:25:55.720552 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 09:25:55.720562 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 09:25:55.720573 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 09:25:55.720584 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 09:25:55.720594 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 09:25:55.720605 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:25:55.720616 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 09:25:55.720626 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2025-09-20 09:25:55.720655 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 09:25:55.720669 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 09:25:55.720681 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:25:55.720725 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 09:25:55.720738 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 09:25:55.720750 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:25:55.720763 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 09:25:55.720776 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 09:25:55.720788 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 09:25:55.720800 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 09:25:55.720813 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 09:25:55.720837 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 09:25:55.720850 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:25:55.720863 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:25:55.720875 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-20 09:25:55.720888 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-20 09:25:55.720901 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-20 09:25:55.720913 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-20 09:25:55.720925 | 
orchestrator | skipping: [testbed-node-5] 2025-09-20 09:25:55.720938 | orchestrator | 2025-09-20 09:25:55.720951 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-20 09:25:55.720980 | orchestrator | Saturday 20 September 2025 09:25:54 +0000 (0:00:01.822) 0:00:44.981 **** 2025-09-20 09:25:55.720993 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:25:55.721006 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:25:55.721018 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:25:55.721028 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:25:55.721039 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:25:55.721055 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:25:55.721067 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:25:55.721077 | orchestrator | 2025-09-20 09:25:55.721088 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-20 09:25:55.721099 | orchestrator | Saturday 20 September 2025 09:25:54 +0000 (0:00:00.590) 0:00:45.571 **** 2025-09-20 09:25:55.721110 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:25:55.721121 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:25:55.721131 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:25:55.721142 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:25:55.721153 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:25:55.721164 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:25:55.721175 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:25:55.721185 | orchestrator | 2025-09-20 09:25:55.721196 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:25:55.721209 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-20 09:25:55.721221 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:25:55.721232 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:25:55.721243 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:25:55.721254 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:25:55.721265 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:25:55.721276 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:25:55.721287 | orchestrator | 2025-09-20 09:25:55.721297 | orchestrator | 2025-09-20 09:25:55.721308 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:25:55.721319 | orchestrator | Saturday 20 September 2025 09:25:55 +0000 (0:00:00.612) 0:00:46.183 **** 2025-09-20 09:25:55.721330 | orchestrator | =============================================================================== 2025-09-20 09:25:55.721348 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.73s 2025-09-20 09:25:55.721359 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.21s 2025-09-20 09:25:55.721370 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.73s 2025-09-20 09:25:55.721381 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.54s 2025-09-20 09:25:55.721391 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.06s 2025-09-20 09:25:55.721402 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.01s 2025-09-20 09:25:55.721413 | orchestrator | osism.commons.network : Remove netplan 
configuration template ----------- 1.92s 2025-09-20 09:25:55.721424 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.82s 2025-09-20 09:25:55.721434 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.73s 2025-09-20 09:25:55.721445 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.62s 2025-09-20 09:25:55.721456 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.50s 2025-09-20 09:25:55.721467 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.31s 2025-09-20 09:25:55.721478 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.23s 2025-09-20 09:25:55.721488 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2025-09-20 09:25:55.721499 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s 2025-09-20 09:25:55.721510 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.13s 2025-09-20 09:25:55.721521 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.05s 2025-09-20 09:25:55.721531 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.05s 2025-09-20 09:25:55.721542 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s 2025-09-20 09:25:55.721553 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.86s 2025-09-20 09:25:55.921958 | orchestrator | + osism apply wireguard 2025-09-20 09:26:07.775154 | orchestrator | 2025-09-20 09:26:07 | INFO  | Task eb947b8d-89b6-473f-9191-5e093bd48872 (wireguard) was prepared for execution. 
2025-09-20 09:26:07.775266 | orchestrator | 2025-09-20 09:26:07 | INFO  | It takes a moment until task eb947b8d-89b6-473f-9191-5e093bd48872 (wireguard) has been started and output is visible here. 2025-09-20 09:26:28.045451 | orchestrator | 2025-09-20 09:26:28.045561 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-20 09:26:28.045578 | orchestrator | 2025-09-20 09:26:28.045590 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-20 09:26:28.045621 | orchestrator | Saturday 20 September 2025 09:26:11 +0000 (0:00:00.230) 0:00:00.230 **** 2025-09-20 09:26:28.045669 | orchestrator | ok: [testbed-manager] 2025-09-20 09:26:28.045682 | orchestrator | 2025-09-20 09:26:28.045693 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-20 09:26:28.045704 | orchestrator | Saturday 20 September 2025 09:26:13 +0000 (0:00:01.654) 0:00:01.885 **** 2025-09-20 09:26:28.045715 | orchestrator | changed: [testbed-manager] 2025-09-20 09:26:28.045726 | orchestrator | 2025-09-20 09:26:28.045737 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-20 09:26:28.045748 | orchestrator | Saturday 20 September 2025 09:26:20 +0000 (0:00:06.775) 0:00:08.660 **** 2025-09-20 09:26:28.045759 | orchestrator | changed: [testbed-manager] 2025-09-20 09:26:28.045770 | orchestrator | 2025-09-20 09:26:28.045780 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-20 09:26:28.045791 | orchestrator | Saturday 20 September 2025 09:26:20 +0000 (0:00:00.560) 0:00:09.220 **** 2025-09-20 09:26:28.045802 | orchestrator | changed: [testbed-manager] 2025-09-20 09:26:28.045836 | orchestrator | 2025-09-20 09:26:28.045847 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-20 09:26:28.045859 | orchestrator 
| Saturday 20 September 2025 09:26:21 +0000 (0:00:00.446) 0:00:09.667 **** 2025-09-20 09:26:28.045870 | orchestrator | ok: [testbed-manager] 2025-09-20 09:26:28.045881 | orchestrator | 2025-09-20 09:26:28.045891 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-20 09:26:28.045902 | orchestrator | Saturday 20 September 2025 09:26:21 +0000 (0:00:00.530) 0:00:10.197 **** 2025-09-20 09:26:28.045912 | orchestrator | ok: [testbed-manager] 2025-09-20 09:26:28.045923 | orchestrator | 2025-09-20 09:26:28.045933 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-20 09:26:28.045944 | orchestrator | Saturday 20 September 2025 09:26:22 +0000 (0:00:00.554) 0:00:10.752 **** 2025-09-20 09:26:28.045955 | orchestrator | ok: [testbed-manager] 2025-09-20 09:26:28.045966 | orchestrator | 2025-09-20 09:26:28.045976 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-20 09:26:28.045987 | orchestrator | Saturday 20 September 2025 09:26:22 +0000 (0:00:00.408) 0:00:11.160 **** 2025-09-20 09:26:28.045999 | orchestrator | changed: [testbed-manager] 2025-09-20 09:26:28.046012 | orchestrator | 2025-09-20 09:26:28.046075 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-20 09:26:28.046088 | orchestrator | Saturday 20 September 2025 09:26:24 +0000 (0:00:01.218) 0:00:12.379 **** 2025-09-20 09:26:28.046101 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-20 09:26:28.046113 | orchestrator | changed: [testbed-manager] 2025-09-20 09:26:28.046126 | orchestrator | 2025-09-20 09:26:28.046139 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-20 09:26:28.046151 | orchestrator | Saturday 20 September 2025 09:26:24 +0000 (0:00:00.935) 0:00:13.315 **** 2025-09-20 09:26:28.046163 | orchestrator | changed: 
[testbed-manager] 2025-09-20 09:26:28.046176 | orchestrator | 2025-09-20 09:26:28.046188 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-20 09:26:28.046201 | orchestrator | Saturday 20 September 2025 09:26:26 +0000 (0:00:01.733) 0:00:15.048 **** 2025-09-20 09:26:28.046214 | orchestrator | changed: [testbed-manager] 2025-09-20 09:26:28.046227 | orchestrator | 2025-09-20 09:26:28.046239 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:26:28.046252 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:26:28.046266 | orchestrator | 2025-09-20 09:26:28.046278 | orchestrator | 2025-09-20 09:26:28.046291 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:26:28.046304 | orchestrator | Saturday 20 September 2025 09:26:27 +0000 (0:00:00.980) 0:00:16.029 **** 2025-09-20 09:26:28.046317 | orchestrator | =============================================================================== 2025-09-20 09:26:28.046330 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.78s 2025-09-20 09:26:28.046343 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.73s 2025-09-20 09:26:28.046356 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.65s 2025-09-20 09:26:28.046366 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s 2025-09-20 09:26:28.046377 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s 2025-09-20 09:26:28.046388 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s 2025-09-20 09:26:28.046398 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 
2025-09-20 09:26:28.046409 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.55s 2025-09-20 09:26:28.046420 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-09-20 09:26:28.046430 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-09-20 09:26:28.046450 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s 2025-09-20 09:26:28.380322 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-20 09:26:28.411368 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-20 09:26:28.411422 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-20 09:26:28.486738 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 185 0 --:--:-- --:--:-- --:--:-- 186 2025-09-20 09:26:28.500058 | orchestrator | + osism apply --environment custom workarounds 2025-09-20 09:26:30.414484 | orchestrator | 2025-09-20 09:26:30 | INFO  | Trying to run play workarounds in environment custom 2025-09-20 09:26:40.653154 | orchestrator | 2025-09-20 09:26:40 | INFO  | Task f88aea67-6aea-4e96-b1cc-5c8b6297c492 (workarounds) was prepared for execution. 2025-09-20 09:26:40.653269 | orchestrator | 2025-09-20 09:26:40 | INFO  | It takes a moment until task f88aea67-6aea-4e96-b1cc-5c8b6297c492 (workarounds) has been started and output is visible here. 
2025-09-20 09:27:07.258756 | orchestrator | 2025-09-20 09:27:07.258871 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:27:07.258888 | orchestrator | 2025-09-20 09:27:07.258901 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-20 09:27:07.258913 | orchestrator | Saturday 20 September 2025 09:26:44 +0000 (0:00:00.152) 0:00:00.152 **** 2025-09-20 09:27:07.258925 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-20 09:27:07.258937 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-20 09:27:07.258949 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-20 09:27:07.258960 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-20 09:27:07.258971 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-20 09:27:07.258982 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-20 09:27:07.258993 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-20 09:27:07.259005 | orchestrator | 2025-09-20 09:27:07.259016 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-20 09:27:07.259027 | orchestrator | 2025-09-20 09:27:07.259039 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-20 09:27:07.259050 | orchestrator | Saturday 20 September 2025 09:26:45 +0000 (0:00:00.772) 0:00:00.925 **** 2025-09-20 09:27:07.259062 | orchestrator | ok: [testbed-manager] 2025-09-20 09:27:07.259075 | orchestrator | 2025-09-20 09:27:07.259086 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-20 09:27:07.259097 | orchestrator | 2025-09-20 09:27:07.259108 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-20 09:27:07.259120 | orchestrator | Saturday 20 September 2025 09:26:47 +0000 (0:00:02.489) 0:00:03.415 **** 2025-09-20 09:27:07.259131 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:27:07.259142 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:27:07.259153 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:27:07.259164 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:27:07.259175 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:27:07.259186 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:27:07.259198 | orchestrator | 2025-09-20 09:27:07.259210 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-20 09:27:07.259224 | orchestrator | 2025-09-20 09:27:07.259237 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-20 09:27:07.259250 | orchestrator | Saturday 20 September 2025 09:26:49 +0000 (0:00:01.867) 0:00:05.282 **** 2025-09-20 09:27:07.259264 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 09:27:07.259277 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 09:27:07.259309 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 09:27:07.259323 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 09:27:07.259336 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 09:27:07.259348 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-20 09:27:07.259361 | orchestrator | 2025-09-20 09:27:07.259374 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-20 09:27:07.259388 | orchestrator | Saturday 20 September 2025 09:26:51 +0000 (0:00:01.553) 0:00:06.836 **** 2025-09-20 09:27:07.259401 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:27:07.259414 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:27:07.259428 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:27:07.259441 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:27:07.259453 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:27:07.259466 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:27:07.259479 | orchestrator | 2025-09-20 09:27:07.259492 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-20 09:27:07.259505 | orchestrator | Saturday 20 September 2025 09:26:55 +0000 (0:00:03.838) 0:00:10.675 **** 2025-09-20 09:27:07.259518 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:27:07.259531 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:27:07.259544 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:27:07.259556 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:27:07.259569 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:27:07.259605 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:27:07.259618 | orchestrator | 2025-09-20 09:27:07.259629 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-20 09:27:07.259640 | orchestrator | 2025-09-20 09:27:07.259651 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-20 09:27:07.259663 | orchestrator | Saturday 20 September 2025 09:26:55 +0000 (0:00:00.729) 0:00:11.404 **** 2025-09-20 09:27:07.259674 | orchestrator | changed: [testbed-manager] 2025-09-20 09:27:07.259685 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:27:07.259696 | orchestrator | changed: [testbed-node-1] 2025-09-20 
09:27:07.259708 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:27:07.259719 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:27:07.259730 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:27:07.259741 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:27:07.259752 | orchestrator | 2025-09-20 09:27:07.259763 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-20 09:27:07.259774 | orchestrator | Saturday 20 September 2025 09:26:57 +0000 (0:00:01.757) 0:00:13.161 **** 2025-09-20 09:27:07.259794 | orchestrator | changed: [testbed-manager] 2025-09-20 09:27:07.259806 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:27:07.259817 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:27:07.259828 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:27:07.259840 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:27:07.259851 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:27:07.259881 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:27:07.259893 | orchestrator | 2025-09-20 09:27:07.259904 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-20 09:27:07.259916 | orchestrator | Saturday 20 September 2025 09:26:59 +0000 (0:00:01.691) 0:00:14.853 **** 2025-09-20 09:27:07.259927 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:27:07.259939 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:27:07.259950 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:27:07.259961 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:27:07.259972 | orchestrator | ok: [testbed-manager] 2025-09-20 09:27:07.259991 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:27:07.260002 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:27:07.260013 | orchestrator | 2025-09-20 09:27:07.260025 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-20 09:27:07.260036 | orchestrator 
| Saturday 20 September 2025 09:27:00 +0000 (0:00:01.598) 0:00:16.451 **** 2025-09-20 09:27:07.260047 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:27:07.260058 | orchestrator | changed: [testbed-manager] 2025-09-20 09:27:07.260070 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:27:07.260081 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:27:07.260092 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:27:07.260103 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:27:07.260114 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:27:07.260125 | orchestrator | 2025-09-20 09:27:07.260136 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-20 09:27:07.260148 | orchestrator | Saturday 20 September 2025 09:27:02 +0000 (0:00:01.799) 0:00:18.251 **** 2025-09-20 09:27:07.260159 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:27:07.260170 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:27:07.260181 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:27:07.260192 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:27:07.260203 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:27:07.260214 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:27:07.260226 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:27:07.260237 | orchestrator | 2025-09-20 09:27:07.260248 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-20 09:27:07.260259 | orchestrator | 2025-09-20 09:27:07.260270 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-20 09:27:07.260282 | orchestrator | Saturday 20 September 2025 09:27:03 +0000 (0:00:00.588) 0:00:18.840 **** 2025-09-20 09:27:07.260293 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:27:07.260304 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:27:07.260315 | orchestrator | ok: [testbed-node-2] 
2025-09-20 09:27:07.260326 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:27:07.260338 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:27:07.260349 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:27:07.260360 | orchestrator | ok: [testbed-manager] 2025-09-20 09:27:07.260371 | orchestrator | 2025-09-20 09:27:07.260382 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:27:07.260395 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:27:07.260407 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:07.260419 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:07.260430 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:07.260441 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:07.260452 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:07.260463 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:07.260475 | orchestrator | 2025-09-20 09:27:07.260486 | orchestrator | 2025-09-20 09:27:07.260497 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:27:07.260509 | orchestrator | Saturday 20 September 2025 09:27:07 +0000 (0:00:03.910) 0:00:22.750 **** 2025-09-20 09:27:07.260526 | orchestrator | =============================================================================== 2025-09-20 09:27:07.260537 | orchestrator | Install python3-docker -------------------------------------------------- 3.91s 2025-09-20 09:27:07.260548 | orchestrator | Run 
update-ca-certificates ---------------------------------------------- 3.84s 2025-09-20 09:27:07.260560 | orchestrator | Apply netplan configuration --------------------------------------------- 2.49s 2025-09-20 09:27:07.260571 | orchestrator | Apply netplan configuration --------------------------------------------- 1.87s 2025-09-20 09:27:07.260608 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.80s 2025-09-20 09:27:07.260620 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.76s 2025-09-20 09:27:07.260631 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.69s 2025-09-20 09:27:07.260642 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.60s 2025-09-20 09:27:07.260658 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.55s 2025-09-20 09:27:07.260669 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2025-09-20 09:27:07.260680 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s 2025-09-20 09:27:07.260698 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.59s 2025-09-20 09:27:07.769851 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-20 09:27:19.777872 | orchestrator | 2025-09-20 09:27:19 | INFO  | Task 1dd1594b-bd7c-492d-a0cf-58c39e087fcc (reboot) was prepared for execution. 2025-09-20 09:27:19.777974 | orchestrator | 2025-09-20 09:27:19 | INFO  | It takes a moment until task 1dd1594b-bd7c-492d-a0cf-58c39e087fcc (reboot) has been started and output is visible here. 
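The reboot task queued above is guarded by `-e ireallymeanit=yes`; the play that follows exits early unless that variable is set. In the playbook this is an Ansible task, but the guard logic can be sketched as a shell function (the function name and message here are hypothetical, for illustration only):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the ireallymeanit guard seen in the reboot play.
# Refuses to proceed unless the caller explicitly confirmed the reboot.
confirm_or_exit() {
    if [[ "${ireallymeanit:-}" != "yes" ]]; then
        echo "refusing to reboot: pass -e ireallymeanit=yes to confirm" >&2
        return 1
    fi
    return 0
}
```

The same effect is achieved in the play by the "Exit playbook, if user did not mean to reboot systems" task, which is skipped (i.e. does not abort) when the variable is set.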
2025-09-20 09:27:29.453014 | orchestrator | 2025-09-20 09:27:29.453151 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 09:27:29.453997 | orchestrator | 2025-09-20 09:27:29.454073 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 09:27:29.454086 | orchestrator | Saturday 20 September 2025 09:27:23 +0000 (0:00:00.207) 0:00:00.207 **** 2025-09-20 09:27:29.454098 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:27:29.454110 | orchestrator | 2025-09-20 09:27:29.454122 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 09:27:29.454133 | orchestrator | Saturday 20 September 2025 09:27:23 +0000 (0:00:00.116) 0:00:00.324 **** 2025-09-20 09:27:29.454145 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:27:29.454156 | orchestrator | 2025-09-20 09:27:29.454167 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 09:27:29.454179 | orchestrator | Saturday 20 September 2025 09:27:24 +0000 (0:00:00.925) 0:00:01.249 **** 2025-09-20 09:27:29.454190 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:27:29.454201 | orchestrator | 2025-09-20 09:27:29.454213 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 09:27:29.454253 | orchestrator | 2025-09-20 09:27:29.454265 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 09:27:29.454276 | orchestrator | Saturday 20 September 2025 09:27:24 +0000 (0:00:00.116) 0:00:01.365 **** 2025-09-20 09:27:29.454287 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:27:29.454299 | orchestrator | 2025-09-20 09:27:29.454310 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 09:27:29.454321 | orchestrator | Saturday 20 September 
2025 09:27:25 +0000 (0:00:00.098) 0:00:01.464 **** 2025-09-20 09:27:29.454332 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:27:29.454344 | orchestrator | 2025-09-20 09:27:29.454355 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 09:27:29.454366 | orchestrator | Saturday 20 September 2025 09:27:25 +0000 (0:00:00.651) 0:00:02.115 **** 2025-09-20 09:27:29.454377 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:27:29.454414 | orchestrator | 2025-09-20 09:27:29.454426 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 09:27:29.454437 | orchestrator | 2025-09-20 09:27:29.454448 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 09:27:29.454459 | orchestrator | Saturday 20 September 2025 09:27:25 +0000 (0:00:00.106) 0:00:02.222 **** 2025-09-20 09:27:29.454470 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:27:29.454481 | orchestrator | 2025-09-20 09:27:29.454492 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 09:27:29.454508 | orchestrator | Saturday 20 September 2025 09:27:25 +0000 (0:00:00.159) 0:00:02.382 **** 2025-09-20 09:27:29.454525 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:27:29.454537 | orchestrator | 2025-09-20 09:27:29.454548 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 09:27:29.454603 | orchestrator | Saturday 20 September 2025 09:27:26 +0000 (0:00:00.640) 0:00:03.022 **** 2025-09-20 09:27:29.454615 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:27:29.454626 | orchestrator | 2025-09-20 09:27:29.454637 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 09:27:29.454648 | orchestrator | 2025-09-20 09:27:29.454659 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-09-20 09:27:29.454670 | orchestrator | Saturday 20 September 2025 09:27:26 +0000 (0:00:00.100) 0:00:03.123 **** 2025-09-20 09:27:29.454681 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:27:29.454692 | orchestrator | 2025-09-20 09:27:29.454703 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 09:27:29.454714 | orchestrator | Saturday 20 September 2025 09:27:26 +0000 (0:00:00.099) 0:00:03.223 **** 2025-09-20 09:27:29.454725 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:27:29.454736 | orchestrator | 2025-09-20 09:27:29.454746 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 09:27:29.454757 | orchestrator | Saturday 20 September 2025 09:27:27 +0000 (0:00:00.675) 0:00:03.899 **** 2025-09-20 09:27:29.454768 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:27:29.454779 | orchestrator | 2025-09-20 09:27:29.454790 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 09:27:29.454802 | orchestrator | 2025-09-20 09:27:29.454813 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 09:27:29.454824 | orchestrator | Saturday 20 September 2025 09:27:27 +0000 (0:00:00.104) 0:00:04.003 **** 2025-09-20 09:27:29.454834 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:27:29.454845 | orchestrator | 2025-09-20 09:27:29.454856 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 09:27:29.454867 | orchestrator | Saturday 20 September 2025 09:27:27 +0000 (0:00:00.106) 0:00:04.109 **** 2025-09-20 09:27:29.454877 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:27:29.454888 | orchestrator | 2025-09-20 09:27:29.454899 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-09-20 09:27:29.454910 | orchestrator | Saturday 20 September 2025 09:27:28 +0000 (0:00:00.665) 0:00:04.775 **** 2025-09-20 09:27:29.454921 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:27:29.454932 | orchestrator | 2025-09-20 09:27:29.454958 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-20 09:27:29.454969 | orchestrator | 2025-09-20 09:27:29.454980 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-20 09:27:29.454991 | orchestrator | Saturday 20 September 2025 09:27:28 +0000 (0:00:00.109) 0:00:04.885 **** 2025-09-20 09:27:29.455002 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:27:29.455014 | orchestrator | 2025-09-20 09:27:29.455024 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-20 09:27:29.455035 | orchestrator | Saturday 20 September 2025 09:27:28 +0000 (0:00:00.081) 0:00:04.967 **** 2025-09-20 09:27:29.455046 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:27:29.455057 | orchestrator | 2025-09-20 09:27:29.455068 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-20 09:27:29.455088 | orchestrator | Saturday 20 September 2025 09:27:29 +0000 (0:00:00.649) 0:00:05.616 **** 2025-09-20 09:27:29.455146 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:27:29.455159 | orchestrator | 2025-09-20 09:27:29.455170 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:27:29.455184 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:29.455204 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:29.455215 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-09-20 09:27:29.455226 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:29.455237 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:29.455248 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:27:29.455259 | orchestrator | 2025-09-20 09:27:29.455270 | orchestrator | 2025-09-20 09:27:29.455281 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:27:29.455292 | orchestrator | Saturday 20 September 2025 09:27:29 +0000 (0:00:00.030) 0:00:05.647 **** 2025-09-20 09:27:29.455303 | orchestrator | =============================================================================== 2025-09-20 09:27:29.455314 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.21s 2025-09-20 09:27:29.455330 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.66s 2025-09-20 09:27:29.455342 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s 2025-09-20 09:27:29.670726 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-20 09:27:41.443630 | orchestrator | 2025-09-20 09:27:41 | INFO  | Task f96b1e5a-a96e-477c-b145-0aaa41a304ea (wait-for-connection) was prepared for execution. 2025-09-20 09:27:41.443744 | orchestrator | 2025-09-20 09:27:41 | INFO  | It takes a moment until task f96b1e5a-a96e-477c-b145-0aaa41a304ea (wait-for-connection) has been started and output is visible here. 
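Note the two-step pattern above: the reboot play deliberately does not wait for hosts to come back ("do not wait for the reboot to complete"), and a separate `wait-for-connection` play then blocks until SSH is reachable again. A minimal shell sketch of that second step, assuming passwordless SSH and a hypothetical `wait_for_ssh` helper name (the real play uses Ansible's connection plugin, not raw ssh):

```shell
#!/usr/bin/env bash
# Poll a host over SSH until it answers or a timeout expires.
# BatchMode=yes prevents interactive password prompts while the host boots.
wait_for_ssh() {
    local host="$1" timeout="${2:-300}" deadline
    deadline=$(( $(date +%s) + timeout ))
    while (( $(date +%s) < deadline )); do
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; then
            return 0   # host is reachable again
        fi
        sleep 5
    done
    return 1           # host never came back within the timeout
}
```

Splitting reboot and wait into separate plays keeps the reboot play fast (all nodes are rebooted in parallel) and lets the wait step report per-host reachability, as the recap above shows.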
2025-09-20 09:27:57.278628 | orchestrator | 2025-09-20 09:27:57.278736 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-20 09:27:57.278752 | orchestrator | 2025-09-20 09:27:57.278764 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-20 09:27:57.278776 | orchestrator | Saturday 20 September 2025 09:27:45 +0000 (0:00:00.233) 0:00:00.233 **** 2025-09-20 09:27:57.278787 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:27:57.278799 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:27:57.278810 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:27:57.278821 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:27:57.278832 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:27:57.278842 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:27:57.278853 | orchestrator | 2025-09-20 09:27:57.278864 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:27:57.278876 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:27:57.278889 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:27:57.278900 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:27:57.278938 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:27:57.278950 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:27:57.278960 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:27:57.278971 | orchestrator | 2025-09-20 09:27:57.278982 | orchestrator | 2025-09-20 09:27:57.278993 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-20 09:27:57.279004 | orchestrator | Saturday 20 September 2025 09:27:56 +0000 (0:00:11.511) 0:00:11.744 **** 2025-09-20 09:27:57.279015 | orchestrator | =============================================================================== 2025-09-20 09:27:57.279026 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.51s 2025-09-20 09:27:57.613346 | orchestrator | + osism apply hddtemp 2025-09-20 09:28:09.713475 | orchestrator | 2025-09-20 09:28:09 | INFO  | Task c390db21-4152-4714-842f-1e3337c1bda3 (hddtemp) was prepared for execution. 2025-09-20 09:28:09.713653 | orchestrator | 2025-09-20 09:28:09 | INFO  | It takes a moment until task c390db21-4152-4714-842f-1e3337c1bda3 (hddtemp) has been started and output is visible here. 2025-09-20 09:28:38.234857 | orchestrator | 2025-09-20 09:28:38.234972 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-20 09:28:38.234989 | orchestrator | 2025-09-20 09:28:38.235022 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-20 09:28:38.235035 | orchestrator | Saturday 20 September 2025 09:28:13 +0000 (0:00:00.238) 0:00:00.238 **** 2025-09-20 09:28:38.235047 | orchestrator | ok: [testbed-manager] 2025-09-20 09:28:38.235059 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:28:38.235070 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:28:38.235081 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:28:38.235092 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:28:38.235103 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:28:38.235114 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:28:38.235126 | orchestrator | 2025-09-20 09:28:38.235137 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-20 09:28:38.235148 | orchestrator | Saturday 20 September 2025 
09:28:14 +0000 (0:00:00.595) 0:00:00.834 **** 2025-09-20 09:28:38.235162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:28:38.235176 | orchestrator | 2025-09-20 09:28:38.235187 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-20 09:28:38.235199 | orchestrator | Saturday 20 September 2025 09:28:15 +0000 (0:00:01.119) 0:00:01.953 **** 2025-09-20 09:28:38.235210 | orchestrator | ok: [testbed-manager] 2025-09-20 09:28:38.235221 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:28:38.235232 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:28:38.235244 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:28:38.235254 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:28:38.235266 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:28:38.235277 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:28:38.235288 | orchestrator | 2025-09-20 09:28:38.235299 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-20 09:28:38.235310 | orchestrator | Saturday 20 September 2025 09:28:17 +0000 (0:00:02.038) 0:00:03.991 **** 2025-09-20 09:28:38.235321 | orchestrator | changed: [testbed-manager] 2025-09-20 09:28:38.235333 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:28:38.235344 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:28:38.235356 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:28:38.235367 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:28:38.235401 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:28:38.235416 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:28:38.235429 | orchestrator | 2025-09-20 09:28:38.235442 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-20 09:28:38.235455 | orchestrator | Saturday 20 September 2025 09:28:18 +0000 (0:00:01.023) 0:00:05.015 **** 2025-09-20 09:28:38.235467 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:28:38.235480 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:28:38.235492 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:28:38.235532 | orchestrator | ok: [testbed-manager] 2025-09-20 09:28:38.235544 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:28:38.235557 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:28:38.235570 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:28:38.235582 | orchestrator | 2025-09-20 09:28:38.235595 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-20 09:28:38.235608 | orchestrator | Saturday 20 September 2025 09:28:20 +0000 (0:00:01.736) 0:00:06.752 **** 2025-09-20 09:28:38.235621 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:28:38.235633 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:28:38.235646 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:28:38.235658 | orchestrator | changed: [testbed-manager] 2025-09-20 09:28:38.235671 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:28:38.235684 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:28:38.235696 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:28:38.235709 | orchestrator | 2025-09-20 09:28:38.235721 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-20 09:28:38.235734 | orchestrator | Saturday 20 September 2025 09:28:20 +0000 (0:00:00.738) 0:00:07.491 **** 2025-09-20 09:28:38.235746 | orchestrator | changed: [testbed-manager] 2025-09-20 09:28:38.235758 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:28:38.235769 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:28:38.235779 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:28:38.235790 | orchestrator | changed: 
[testbed-node-3] 2025-09-20 09:28:38.235801 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:28:38.235812 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:28:38.235823 | orchestrator | 2025-09-20 09:28:38.235834 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-20 09:28:38.235845 | orchestrator | Saturday 20 September 2025 09:28:34 +0000 (0:00:13.513) 0:00:21.004 **** 2025-09-20 09:28:38.235857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:28:38.235868 | orchestrator | 2025-09-20 09:28:38.235879 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-20 09:28:38.235890 | orchestrator | Saturday 20 September 2025 09:28:35 +0000 (0:00:01.474) 0:00:22.478 **** 2025-09-20 09:28:38.235901 | orchestrator | changed: [testbed-manager] 2025-09-20 09:28:38.235917 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:28:38.235928 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:28:38.235939 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:28:38.235950 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:28:38.235961 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:28:38.235971 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:28:38.235982 | orchestrator | 2025-09-20 09:28:38.235993 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:28:38.236004 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:28:38.236034 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:28:38.236046 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:28:38.236066 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:28:38.236078 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:28:38.236088 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:28:38.236099 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:28:38.236110 | orchestrator | 2025-09-20 09:28:38.236121 | orchestrator | 2025-09-20 09:28:38.236132 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:28:38.236143 | orchestrator | Saturday 20 September 2025 09:28:37 +0000 (0:00:01.994) 0:00:24.473 **** 2025-09-20 09:28:38.236154 | orchestrator | =============================================================================== 2025-09-20 09:28:38.236164 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.51s 2025-09-20 09:28:38.236175 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.04s 2025-09-20 09:28:38.236186 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.99s 2025-09-20 09:28:38.236197 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.74s 2025-09-20 09:28:38.236207 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.47s 2025-09-20 09:28:38.236218 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.12s 2025-09-20 09:28:38.236229 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.02s 2025-09-20 09:28:38.236240 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.74s 2025-09-20 09:28:38.236250 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.60s 2025-09-20 09:28:38.562778 | orchestrator | ++ semver latest 7.1.1 2025-09-20 09:28:38.610937 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-20 09:28:38.611030 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 09:28:38.611054 | orchestrator | + sudo systemctl restart manager.service 2025-09-20 09:29:30.814571 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-20 09:29:30.814679 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-20 09:29:30.814697 | orchestrator | + local max_attempts=60 2025-09-20 09:29:30.814710 | orchestrator | + local name=ceph-ansible 2025-09-20 09:29:30.814721 | orchestrator | + local attempt_num=1 2025-09-20 09:29:30.814732 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:29:30.857337 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-20 09:29:30.857399 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:29:30.857416 | orchestrator | + sleep 5 2025-09-20 09:29:35.862108 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:29:35.900393 | orchestrator | template parsing error: template: :1:8: executing "" at <.State.Health.Status>: map has no entry for key "Health" 2025-09-20 09:29:35.904003 | orchestrator | + [[ '' == \h\e\a\l\t\h\y ]] 2025-09-20 09:29:35.904033 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:29:35.904045 | orchestrator | + sleep 5 2025-09-20 09:29:40.907404 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:29:40.924262 | orchestrator | template parsing error: template: :1:8: executing "" at <.State.Health.Status>: map has no entry for key "Health" 2025-09-20 09:29:40.925286 | orchestrator | + [[ '' == 
\h\e\a\l\t\h\y ]] 2025-09-20 09:29:40.925316 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:29:40.925328 | orchestrator | + sleep 5 2025-09-20 09:29:45.928858 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:29:45.964605 | orchestrator | template parsing error: template: :1:8: executing "" at <.State.Health.Status>: map has no entry for key "Health" 2025-09-20 09:29:45.967778 | orchestrator | + [[ '' == \h\e\a\l\t\h\y ]] 2025-09-20 09:29:45.967809 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:29:45.967822 | orchestrator | + sleep 5 2025-09-20 09:29:50.971507 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:29:51.001348 | orchestrator | template parsing error: template: :1:8: executing "" at <.State.Health.Status>: map has no entry for key "Health" 2025-09-20 09:29:51.004551 | orchestrator | + [[ '' == \h\e\a\l\t\h\y ]] 2025-09-20 09:29:51.004588 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:29:51.004601 | orchestrator | + sleep 5 2025-09-20 09:29:56.009865 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:29:56.039368 | orchestrator | template parsing error: template: :1:8: executing "" at <.State.Health.Status>: map has no entry for key "Health" 2025-09-20 09:29:56.043024 | orchestrator | + [[ '' == \h\e\a\l\t\h\y ]] 2025-09-20 09:29:56.043074 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:29:56.043088 | orchestrator | + sleep 5 2025-09-20 09:30:01.047738 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:30:01.080201 | orchestrator | template parsing error: template: :1:8: executing "" at <.State.Health.Status>: map has no entry for key "Health" 2025-09-20 09:30:01.085549 | orchestrator | + [[ '' == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:01.085579 | orchestrator | + (( 
attempt_num++ == max_attempts )) 2025-09-20 09:30:01.085592 | orchestrator | + sleep 5 2025-09-20 09:30:06.090284 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:30:06.246280 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:06.246359 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:30:06.246374 | orchestrator | + sleep 5 2025-09-20 09:30:11.249453 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:30:11.267342 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:11.267446 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:30:11.267463 | orchestrator | + sleep 5 2025-09-20 09:30:16.271098 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:30:16.306819 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:16.306900 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:30:16.306916 | orchestrator | + sleep 5 2025-09-20 09:30:21.311364 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:30:21.350386 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:21.350951 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:30:21.350980 | orchestrator | + sleep 5 2025-09-20 09:30:26.355696 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:30:26.391579 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:26.391643 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-20 09:30:26.391658 | orchestrator | + sleep 5 2025-09-20 09:30:31.397685 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:30:31.440510 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:31.440585 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-20 09:30:31.440600 | orchestrator | + sleep 5 2025-09-20 09:30:36.444870 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-20 09:30:36.480334 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:36.480404 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-20 09:30:36.480542 | orchestrator | + local max_attempts=60 2025-09-20 09:30:36.480557 | orchestrator | + local name=kolla-ansible 2025-09-20 09:30:36.480578 | orchestrator | + local attempt_num=1 2025-09-20 09:30:36.481333 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-20 09:30:36.515265 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:36.515316 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-20 09:30:36.515329 | orchestrator | + local max_attempts=60 2025-09-20 09:30:36.515965 | orchestrator | + local name=osism-ansible 2025-09-20 09:30:36.515987 | orchestrator | + local attempt_num=1 2025-09-20 09:30:36.516474 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-20 09:30:36.556087 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-20 09:30:36.556135 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-20 09:30:36.556149 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-20 09:30:37.068779 | orchestrator | ARA in osism-ansible already disabled. 2025-09-20 09:30:37.252231 | orchestrator | + osism apply gather-facts 2025-09-20 09:30:49.572268 | orchestrator | 2025-09-20 09:30:49 | INFO  | Task 35bbcdeb-98cb-4957-a161-a393c02c0406 (gather-facts) was prepared for execution. 2025-09-20 09:30:49.572371 | orchestrator | 2025-09-20 09:30:49 | INFO  | It takes a moment until task 35bbcdeb-98cb-4957-a161-a393c02c0406 (gather-facts) has been started and output is visible here. 
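The health-wait polling traced above boils down to this sketch (names and the 5-second cadence follow the trace; `get_health_status` is a stand-in wrapper for the `docker inspect` call so the loop can be exercised without Docker):

```shell
# Stand-in for the docker inspect call seen in the trace.
get_health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll until the container reports "healthy", giving up after max_attempts
# polls (POSIX rendering of the bash `(( attempt_num++ == max_attempts ))`
# arithmetic visible in the trace).
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    until [ "$(get_health_status "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -eq "$max_attempts" ]; then
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

The repeated `template parsing error ... map has no entry for key "Health"` lines occur because `.State.Health` is absent until the container's healthcheck has run at least once, so the command substitution yields an empty string and the loop keeps waiting; a guarded template such as `'{{if .State.Health}}{{.State.Health.Status}}{{end}}'` would avoid the error output without changing the behavior.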
2025-09-20 09:31:03.245289 | orchestrator | 2025-09-20 09:31:03.245460 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 09:31:03.245479 | orchestrator | 2025-09-20 09:31:03.245492 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-20 09:31:03.245504 | orchestrator | Saturday 20 September 2025 09:30:53 +0000 (0:00:00.201) 0:00:00.201 **** 2025-09-20 09:31:03.245516 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:31:03.245528 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:31:03.245539 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:31:03.245551 | orchestrator | ok: [testbed-manager] 2025-09-20 09:31:03.245562 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:31:03.245573 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:31:03.245584 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:31:03.245595 | orchestrator | 2025-09-20 09:31:03.245607 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-20 09:31:03.245618 | orchestrator | 2025-09-20 09:31:03.245629 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-20 09:31:03.245640 | orchestrator | Saturday 20 September 2025 09:31:02 +0000 (0:00:09.177) 0:00:09.379 **** 2025-09-20 09:31:03.245651 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:31:03.245663 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:31:03.245674 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:31:03.245686 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:31:03.245697 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:31:03.245708 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:31:03.245719 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:31:03.245730 | orchestrator | 2025-09-20 09:31:03.245741 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-20 09:31:03.245753 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:31:03.245765 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:31:03.245776 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:31:03.245788 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:31:03.245799 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:31:03.245829 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:31:03.245844 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-20 09:31:03.245856 | orchestrator | 2025-09-20 09:31:03.245869 | orchestrator | 2025-09-20 09:31:03.245882 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:31:03.245895 | orchestrator | Saturday 20 September 2025 09:31:02 +0000 (0:00:00.512) 0:00:09.891 **** 2025-09-20 09:31:03.245908 | orchestrator | =============================================================================== 2025-09-20 09:31:03.245921 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.18s 2025-09-20 09:31:03.245958 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2025-09-20 09:31:03.577473 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-20 09:31:03.594234 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-20 09:31:03.608241 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-20 09:31:03.620160 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-20 09:31:03.632183 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-20 09:31:03.643956 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-20 09:31:03.661669 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-20 09:31:03.675525 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-20 09:31:03.691463 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-20 09:31:03.708860 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-20 09:31:03.733365 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-20 09:31:03.748343 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-20 09:31:03.761496 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-20 09:31:03.776615 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-20 09:31:03.793723 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-20 09:31:03.810681 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-20 09:31:03.831634 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-20 09:31:03.847860 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-20 09:31:03.861349 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-20 09:31:03.875079 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-20 09:31:03.891828 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-20 09:31:04.076416 | orchestrator | ok: Runtime: 0:23:43.072248 2025-09-20 09:31:04.176136 | 2025-09-20 09:31:04.176263 | TASK [Deploy services] 2025-09-20 09:31:04.709037 | orchestrator | skipping: Conditional result was False 2025-09-20 09:31:04.718549 | 2025-09-20 09:31:04.718670 | TASK [Deploy in a nutshell] 2025-09-20 09:31:05.433302 | orchestrator | 2025-09-20 09:31:05.433513 | orchestrator | # PULL IMAGES 2025-09-20 09:31:05.433540 | orchestrator | 2025-09-20 09:31:05.433554 | orchestrator | + set -e 2025-09-20 09:31:05.433571 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 09:31:05.433592 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 09:31:05.433607 | orchestrator | ++ INTERACTIVE=false 2025-09-20 09:31:05.433649 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 09:31:05.433672 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 09:31:05.433686 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 09:31:05.433698 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 09:31:05.433717 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 09:31:05.433728 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 09:31:05.433746 | orchestrator | ++ 
CEPH_VERSION=reef 2025-09-20 09:31:05.433758 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 09:31:05.433776 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 09:31:05.433787 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 09:31:05.433801 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 09:31:05.433813 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 09:31:05.433828 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 09:31:05.433839 | orchestrator | ++ export ARA=false 2025-09-20 09:31:05.433850 | orchestrator | ++ ARA=false 2025-09-20 09:31:05.433861 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 09:31:05.433873 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 09:31:05.433884 | orchestrator | ++ export TEMPEST=false 2025-09-20 09:31:05.433894 | orchestrator | ++ TEMPEST=false 2025-09-20 09:31:05.433905 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 09:31:05.433916 | orchestrator | ++ IS_ZUUL=true 2025-09-20 09:31:05.433927 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 09:31:05.433939 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 09:31:05.433949 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 09:31:05.433960 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 09:31:05.433971 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 09:31:05.433982 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 09:31:05.433993 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 09:31:05.434004 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 09:31:05.434098 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 09:31:05.434124 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 09:31:05.434136 | orchestrator | + echo 2025-09-20 09:31:05.434148 | orchestrator | + echo '# PULL IMAGES' 2025-09-20 09:31:05.434159 | orchestrator | + echo 2025-09-20 09:31:05.434186 | orchestrator | ++ semver latest 7.0.0 2025-09-20 
09:31:05.495476 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-20 09:31:05.495528 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 09:31:05.495543 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-20 09:31:07.426722 | orchestrator | 2025-09-20 09:31:07 | INFO  | Trying to run play pull-images in environment custom 2025-09-20 09:31:17.561360 | orchestrator | 2025-09-20 09:31:17 | INFO  | Task da368bcc-79ad-494c-bacb-3a34f5e29fc3 (pull-images) was prepared for execution. 2025-09-20 09:31:17.561458 | orchestrator | 2025-09-20 09:31:17 | INFO  | Task da368bcc-79ad-494c-bacb-3a34f5e29fc3 is running in background. No more output. Check ARA for logs. 2025-09-20 09:31:19.902471 | orchestrator | 2025-09-20 09:31:19 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-20 09:31:30.028843 | orchestrator | 2025-09-20 09:31:30 | INFO  | Task 80ae7b7f-1a14-4b53-ad20-c82f7130400f (wipe-partitions) was prepared for execution. 2025-09-20 09:31:30.028949 | orchestrator | 2025-09-20 09:31:30 | INFO  | It takes a moment until task 80ae7b7f-1a14-4b53-ad20-c82f7130400f (wipe-partitions) has been started and output is visible here. 
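The version gate right before `osism apply ... pull-images` can be sketched as below (assumptions: `semver a b` prints a -1/0/1 comparator result as the `[[ -1 -ge 0 ]]` test suggests; because `latest` is not a parseable version it compares below `7.0.0`, which is why the trace falls through to the explicit string check):

```shell
# Run the pull-images play for manager versions >= 7.0.0 or for the
# moving "latest" tag; `semver` is assumed to print -1/0/1.
use_pull_images_play() {
    version="$1"
    if [ "$(semver "$version" 7.0.0)" -ge 0 ] || [ "$version" = "latest" ]; then
        return 0   # i.e. osism apply --no-wait -r 2 -e custom pull-images
    fi
    return 1
}
```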
2025-09-20 09:31:42.573408 | orchestrator | 2025-09-20 09:31:42.573553 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-20 09:31:42.573571 | orchestrator | 2025-09-20 09:31:42.573583 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-20 09:31:42.573600 | orchestrator | Saturday 20 September 2025 09:31:34 +0000 (0:00:00.124) 0:00:00.124 **** 2025-09-20 09:31:42.573613 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:31:42.573625 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:31:42.573636 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:31:42.573648 | orchestrator | 2025-09-20 09:31:42.573659 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-20 09:31:42.573691 | orchestrator | Saturday 20 September 2025 09:31:35 +0000 (0:00:00.594) 0:00:00.719 **** 2025-09-20 09:31:42.573703 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:31:42.573715 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:31:42.573730 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:31:42.573742 | orchestrator | 2025-09-20 09:31:42.573753 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-20 09:31:42.573764 | orchestrator | Saturday 20 September 2025 09:31:35 +0000 (0:00:00.233) 0:00:00.953 **** 2025-09-20 09:31:42.573775 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:31:42.573786 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:31:42.573797 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:31:42.573808 | orchestrator | 2025-09-20 09:31:42.573820 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-20 09:31:42.573831 | orchestrator | Saturday 20 September 2025 09:31:36 +0000 (0:00:00.726) 0:00:01.680 **** 2025-09-20 09:31:42.573842 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 09:31:42.573853 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:31:42.573864 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:31:42.573875 | orchestrator | 2025-09-20 09:31:42.573886 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-20 09:31:42.573897 | orchestrator | Saturday 20 September 2025 09:31:36 +0000 (0:00:00.230) 0:00:01.911 **** 2025-09-20 09:31:42.573908 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-20 09:31:42.573922 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-20 09:31:42.573934 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-20 09:31:42.573947 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-20 09:31:42.573960 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-20 09:31:42.573973 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-20 09:31:42.573985 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-20 09:31:42.573997 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-20 09:31:42.574010 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-20 09:31:42.574105 | orchestrator | 2025-09-20 09:31:42.574139 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-20 09:31:42.574154 | orchestrator | Saturday 20 September 2025 09:31:37 +0000 (0:00:01.151) 0:00:03.063 **** 2025-09-20 09:31:42.574167 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-20 09:31:42.574180 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-20 09:31:42.574193 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-20 09:31:42.574206 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-20 09:31:42.574218 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-20 09:31:42.574230 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-20 09:31:42.574242 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-20 09:31:42.574255 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-20 09:31:42.574267 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-20 09:31:42.574279 | orchestrator | 2025-09-20 09:31:42.574292 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-20 09:31:42.574304 | orchestrator | Saturday 20 September 2025 09:31:38 +0000 (0:00:01.299) 0:00:04.362 **** 2025-09-20 09:31:42.574315 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-20 09:31:42.574326 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-20 09:31:42.574337 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-20 09:31:42.574348 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-20 09:31:42.574359 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-20 09:31:42.574375 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-20 09:31:42.574386 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-20 09:31:42.574405 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-20 09:31:42.574416 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-20 09:31:42.574451 | orchestrator | 2025-09-20 09:31:42.574462 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-20 09:31:42.574473 | orchestrator | Saturday 20 September 2025 09:31:40 +0000 (0:00:02.145) 0:00:06.508 **** 2025-09-20 09:31:42.574484 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:31:42.574495 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:31:42.574506 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:31:42.574517 | orchestrator | 2025-09-20 09:31:42.574527 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-20 09:31:42.574538 | orchestrator | Saturday 20 September 2025 09:31:41 +0000 (0:00:00.590) 0:00:07.098 **** 2025-09-20 09:31:42.574549 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:31:42.574560 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:31:42.574571 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:31:42.574582 | orchestrator | 2025-09-20 09:31:42.574593 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:31:42.574606 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:31:42.574619 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:31:42.574648 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:31:42.574660 | orchestrator | 2025-09-20 09:31:42.574671 | orchestrator | 2025-09-20 09:31:42.574683 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:31:42.574694 | orchestrator | Saturday 20 September 2025 09:31:42 +0000 (0:00:00.655) 0:00:07.754 **** 2025-09-20 09:31:42.574704 | orchestrator | =============================================================================== 2025-09-20 09:31:42.574715 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.15s 2025-09-20 09:31:42.574726 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.30s 2025-09-20 09:31:42.574737 | orchestrator | Check device availability ----------------------------------------------- 1.15s 2025-09-20 09:31:42.574748 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.73s 2025-09-20 09:31:42.574758 | orchestrator | Request device events from the kernel 
----------------------------------- 0.66s 2025-09-20 09:31:42.574769 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s 2025-09-20 09:31:42.574780 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-09-20 09:31:42.574791 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-09-20 09:31:42.574802 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-09-20 09:31:54.864101 | orchestrator | 2025-09-20 09:31:54 | INFO  | Task 0633a993-80a2-45b4-8964-d23b528e8154 (facts) was prepared for execution. 2025-09-20 09:31:54.864206 | orchestrator | 2025-09-20 09:31:54 | INFO  | It takes a moment until task 0633a993-80a2-45b4-8964-d23b528e8154 (facts) has been started and output is visible here. 2025-09-20 09:32:06.337089 | orchestrator | 2025-09-20 09:32:06.337218 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-20 09:32:06.337239 | orchestrator | 2025-09-20 09:32:06.339432 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-20 09:32:06.339480 | orchestrator | Saturday 20 September 2025 09:31:58 +0000 (0:00:00.243) 0:00:00.243 **** 2025-09-20 09:32:06.339493 | orchestrator | ok: [testbed-manager] 2025-09-20 09:32:06.339505 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:32:06.339516 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:32:06.339553 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:32:06.339564 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:32:06.339575 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:32:06.339586 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:32:06.339596 | orchestrator | 2025-09-20 09:32:06.339610 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-20 09:32:06.339621 | 
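Per device, the wipe-partitions play above amounts to the following sketch (commands and flags are inferred from the task names, so treat them as an assumption; this is DESTRUCTIVE on real block devices, which is why the driver loop is left as a comment):

```shell
# Wipe one device the way the play does: drop filesystem/LVM signatures,
# then zero the first 32M (covers GPT headers and LVM labels). DESTRUCTIVE.
wipe_device() {
    dev="$1"
    wipefs --all "$dev"                        # "Wipe partitions with wipefs"
    dd if=/dev/zero of="$dev" bs=1M count=32   # "Overwrite first 32M with zeros"
}

# Let the kernel and udev notice the now-empty devices.
refresh_udev() {
    udevadm control --reload-rules   # "Reload udev rules"
    udevadm trigger                  # "Request device events from the kernel"
}

# Example (run as root on the storage nodes, matching the play's targets):
#   for dev in /dev/sdb /dev/sdc /dev/sdd; do wipe_device "$dev"; done
#   refresh_udev
```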
orchestrator | Saturday 20 September 2025 09:31:59 +0000 (0:00:01.059) 0:00:01.302 **** 2025-09-20 09:32:06.339632 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:32:06.339643 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:32:06.339654 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:32:06.339665 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:32:06.339676 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:06.339686 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:06.339697 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:06.339708 | orchestrator | 2025-09-20 09:32:06.339719 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 09:32:06.339729 | orchestrator | 2025-09-20 09:32:06.339740 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-20 09:32:06.339751 | orchestrator | Saturday 20 September 2025 09:32:00 +0000 (0:00:01.253) 0:00:02.556 **** 2025-09-20 09:32:06.339762 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:32:06.339772 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:32:06.339784 | orchestrator | ok: [testbed-manager] 2025-09-20 09:32:06.339795 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:32:06.339805 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:32:06.339816 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:32:06.339827 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:32:06.339837 | orchestrator | 2025-09-20 09:32:06.339848 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-20 09:32:06.339859 | orchestrator | 2025-09-20 09:32:06.339870 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-20 09:32:06.339899 | orchestrator | Saturday 20 September 2025 09:32:05 +0000 (0:00:04.614) 0:00:07.170 **** 2025-09-20 09:32:06.339911 | orchestrator | 
skipping: [testbed-manager] 2025-09-20 09:32:06.339921 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:32:06.339932 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:32:06.339943 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:32:06.339954 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:06.339964 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:06.339975 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:06.339985 | orchestrator | 2025-09-20 09:32:06.339996 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:32:06.340007 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:32:06.340020 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:32:06.340031 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:32:06.340041 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:32:06.340052 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:32:06.340063 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:32:06.340074 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:32:06.340085 | orchestrator | 2025-09-20 09:32:06.340104 | orchestrator | 2025-09-20 09:32:06.340115 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:32:06.340125 | orchestrator | Saturday 20 September 2025 09:32:06 +0000 (0:00:00.615) 0:00:07.786 **** 2025-09-20 09:32:06.340136 | orchestrator | =============================================================================== 
2025-09-20 09:32:06.340147 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.61s 2025-09-20 09:32:06.340157 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2025-09-20 09:32:06.340168 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s 2025-09-20 09:32:06.340179 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2025-09-20 09:32:08.311210 | orchestrator | 2025-09-20 09:32:08 | INFO  | Task 9be262c4-5875-4fe3-a13b-6755e8516aad (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-20 09:32:08.311312 | orchestrator | 2025-09-20 09:32:08 | INFO  | It takes a moment until task 9be262c4-5875-4fe3-a13b-6755e8516aad (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-09-20 09:32:19.136621 | orchestrator | 2025-09-20 09:32:19.136734 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-20 09:32:19.136750 | orchestrator | 2025-09-20 09:32:19.136762 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 09:32:19.136777 | orchestrator | Saturday 20 September 2025 09:32:12 +0000 (0:00:00.326) 0:00:00.326 **** 2025-09-20 09:32:19.136788 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 09:32:19.136800 | orchestrator | 2025-09-20 09:32:19.136811 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 09:32:19.136821 | orchestrator | Saturday 20 September 2025 09:32:12 +0000 (0:00:00.238) 0:00:00.565 **** 2025-09-20 09:32:19.136832 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:32:19.136844 | orchestrator | 2025-09-20 09:32:19.136855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.136866 | orchestrator | 
Saturday 20 September 2025 09:32:12 +0000 (0:00:00.205) 0:00:00.770 **** 2025-09-20 09:32:19.136876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-20 09:32:19.136888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-20 09:32:19.136899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-20 09:32:19.136909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-20 09:32:19.136920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-20 09:32:19.136931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-20 09:32:19.136941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-20 09:32:19.136952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-20 09:32:19.136963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-20 09:32:19.136973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-20 09:32:19.136984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-20 09:32:19.137004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-20 09:32:19.137015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-20 09:32:19.137025 | orchestrator | 2025-09-20 09:32:19.137036 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137047 | orchestrator | Saturday 20 September 2025 09:32:12 +0000 (0:00:00.317) 0:00:01.088 **** 2025-09-20 
09:32:19.137058 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.137088 | orchestrator | 2025-09-20 09:32:19.137101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137113 | orchestrator | Saturday 20 September 2025 09:32:13 +0000 (0:00:00.369) 0:00:01.457 **** 2025-09-20 09:32:19.137125 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.137137 | orchestrator | 2025-09-20 09:32:19.137150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137162 | orchestrator | Saturday 20 September 2025 09:32:13 +0000 (0:00:00.187) 0:00:01.644 **** 2025-09-20 09:32:19.137173 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.137185 | orchestrator | 2025-09-20 09:32:19.137197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137209 | orchestrator | Saturday 20 September 2025 09:32:13 +0000 (0:00:00.169) 0:00:01.814 **** 2025-09-20 09:32:19.137221 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.137237 | orchestrator | 2025-09-20 09:32:19.137249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137261 | orchestrator | Saturday 20 September 2025 09:32:13 +0000 (0:00:00.182) 0:00:01.997 **** 2025-09-20 09:32:19.137274 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.137287 | orchestrator | 2025-09-20 09:32:19.137299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137312 | orchestrator | Saturday 20 September 2025 09:32:14 +0000 (0:00:00.231) 0:00:02.228 **** 2025-09-20 09:32:19.137324 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.137336 | orchestrator | 2025-09-20 09:32:19.137348 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-20 09:32:19.137360 | orchestrator | Saturday 20 September 2025 09:32:14 +0000 (0:00:00.180) 0:00:02.409 **** 2025-09-20 09:32:19.137372 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.137384 | orchestrator | 2025-09-20 09:32:19.137396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137409 | orchestrator | Saturday 20 September 2025 09:32:14 +0000 (0:00:00.182) 0:00:02.592 **** 2025-09-20 09:32:19.137421 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.137433 | orchestrator | 2025-09-20 09:32:19.137445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137456 | orchestrator | Saturday 20 September 2025 09:32:14 +0000 (0:00:00.190) 0:00:02.782 **** 2025-09-20 09:32:19.137491 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590) 2025-09-20 09:32:19.137504 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590) 2025-09-20 09:32:19.137514 | orchestrator | 2025-09-20 09:32:19.137525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137536 | orchestrator | Saturday 20 September 2025 09:32:14 +0000 (0:00:00.396) 0:00:03.179 **** 2025-09-20 09:32:19.137564 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9) 2025-09-20 09:32:19.137576 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9) 2025-09-20 09:32:19.137587 | orchestrator | 2025-09-20 09:32:19.137598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137609 | orchestrator | Saturday 20 September 2025 09:32:15 +0000 (0:00:00.390) 0:00:03.569 **** 2025-09-20 09:32:19.137619 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650) 2025-09-20 09:32:19.137630 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650) 2025-09-20 09:32:19.137641 | orchestrator | 2025-09-20 09:32:19.137652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137662 | orchestrator | Saturday 20 September 2025 09:32:15 +0000 (0:00:00.532) 0:00:04.101 **** 2025-09-20 09:32:19.137673 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b) 2025-09-20 09:32:19.137692 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b) 2025-09-20 09:32:19.137703 | orchestrator | 2025-09-20 09:32:19.137714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:19.137725 | orchestrator | Saturday 20 September 2025 09:32:16 +0000 (0:00:00.553) 0:00:04.655 **** 2025-09-20 09:32:19.137735 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 09:32:19.137746 | orchestrator | 2025-09-20 09:32:19.137756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:19.137773 | orchestrator | Saturday 20 September 2025 09:32:17 +0000 (0:00:00.592) 0:00:05.247 **** 2025-09-20 09:32:19.137784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-20 09:32:19.137795 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-20 09:32:19.137805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-20 09:32:19.137816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-09-20 09:32:19.137827 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-20 09:32:19.137837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-20 09:32:19.137848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-20 09:32:19.137859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-20 09:32:19.137869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-20 09:32:19.137880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-20 09:32:19.137891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-20 09:32:19.137901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-20 09:32:19.137912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-20 09:32:19.137923 | orchestrator | 2025-09-20 09:32:19.137933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:19.137944 | orchestrator | Saturday 20 September 2025 09:32:17 +0000 (0:00:00.355) 0:00:05.603 **** 2025-09-20 09:32:19.137955 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.137965 | orchestrator | 2025-09-20 09:32:19.137976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:19.137987 | orchestrator | Saturday 20 September 2025 09:32:17 +0000 (0:00:00.199) 0:00:05.803 **** 2025-09-20 09:32:19.137997 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.138008 | orchestrator | 2025-09-20 09:32:19.138077 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-20 09:32:19.138091 | orchestrator | Saturday 20 September 2025 09:32:17 +0000 (0:00:00.223) 0:00:06.027 **** 2025-09-20 09:32:19.138102 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.138113 | orchestrator | 2025-09-20 09:32:19.138124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:19.138135 | orchestrator | Saturday 20 September 2025 09:32:18 +0000 (0:00:00.208) 0:00:06.235 **** 2025-09-20 09:32:19.138145 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.138156 | orchestrator | 2025-09-20 09:32:19.138167 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:19.138178 | orchestrator | Saturday 20 September 2025 09:32:18 +0000 (0:00:00.215) 0:00:06.451 **** 2025-09-20 09:32:19.138189 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.138200 | orchestrator | 2025-09-20 09:32:19.138218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:19.138229 | orchestrator | Saturday 20 September 2025 09:32:18 +0000 (0:00:00.197) 0:00:06.649 **** 2025-09-20 09:32:19.138239 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.138250 | orchestrator | 2025-09-20 09:32:19.138261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:19.138272 | orchestrator | Saturday 20 September 2025 09:32:18 +0000 (0:00:00.212) 0:00:06.861 **** 2025-09-20 09:32:19.138283 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:19.138294 | orchestrator | 2025-09-20 09:32:19.138304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:19.138315 | orchestrator | Saturday 20 September 2025 09:32:18 +0000 (0:00:00.243) 0:00:07.105 **** 2025-09-20 09:32:19.138334 | 
orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.867718 | orchestrator | 2025-09-20 09:32:26.867823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:26.867839 | orchestrator | Saturday 20 September 2025 09:32:19 +0000 (0:00:00.201) 0:00:07.306 **** 2025-09-20 09:32:26.867851 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-20 09:32:26.867863 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-20 09:32:26.867874 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-20 09:32:26.867885 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-20 09:32:26.867896 | orchestrator | 2025-09-20 09:32:26.867907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:26.867918 | orchestrator | Saturday 20 September 2025 09:32:20 +0000 (0:00:01.011) 0:00:08.318 **** 2025-09-20 09:32:26.867928 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.867939 | orchestrator | 2025-09-20 09:32:26.867950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:26.867961 | orchestrator | Saturday 20 September 2025 09:32:20 +0000 (0:00:00.210) 0:00:08.528 **** 2025-09-20 09:32:26.867971 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.867982 | orchestrator | 2025-09-20 09:32:26.867993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:26.868003 | orchestrator | Saturday 20 September 2025 09:32:20 +0000 (0:00:00.201) 0:00:08.729 **** 2025-09-20 09:32:26.868014 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.868025 | orchestrator | 2025-09-20 09:32:26.868035 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:26.868046 | orchestrator | Saturday 20 September 2025 09:32:20 +0000 (0:00:00.201) 
0:00:08.931 **** 2025-09-20 09:32:26.868056 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.868067 | orchestrator | 2025-09-20 09:32:26.868078 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-20 09:32:26.868089 | orchestrator | Saturday 20 September 2025 09:32:20 +0000 (0:00:00.217) 0:00:09.148 **** 2025-09-20 09:32:26.868099 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-20 09:32:26.868110 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-20 09:32:26.868121 | orchestrator | 2025-09-20 09:32:26.868132 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-20 09:32:26.868142 | orchestrator | Saturday 20 September 2025 09:32:21 +0000 (0:00:00.194) 0:00:09.342 **** 2025-09-20 09:32:26.868171 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.868182 | orchestrator | 2025-09-20 09:32:26.868193 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-20 09:32:26.868203 | orchestrator | Saturday 20 September 2025 09:32:21 +0000 (0:00:00.140) 0:00:09.483 **** 2025-09-20 09:32:26.868214 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.868225 | orchestrator | 2025-09-20 09:32:26.868235 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-20 09:32:26.868246 | orchestrator | Saturday 20 September 2025 09:32:21 +0000 (0:00:00.153) 0:00:09.637 **** 2025-09-20 09:32:26.868259 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.868297 | orchestrator | 2025-09-20 09:32:26.868310 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-20 09:32:26.868322 | orchestrator | Saturday 20 September 2025 09:32:21 +0000 (0:00:00.136) 0:00:09.773 **** 2025-09-20 09:32:26.868335 | orchestrator | ok: 
[testbed-node-3] 2025-09-20 09:32:26.868347 | orchestrator | 2025-09-20 09:32:26.868359 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-20 09:32:26.868371 | orchestrator | Saturday 20 September 2025 09:32:21 +0000 (0:00:00.154) 0:00:09.927 **** 2025-09-20 09:32:26.868385 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cf3001a-a2bc-51f5-b2f0-80e0839adf22'}}) 2025-09-20 09:32:26.868397 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f5012b99-8722-5cc3-9d11-b95ce6d4943a'}}) 2025-09-20 09:32:26.868409 | orchestrator | 2025-09-20 09:32:26.868421 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-20 09:32:26.868433 | orchestrator | Saturday 20 September 2025 09:32:21 +0000 (0:00:00.168) 0:00:10.096 **** 2025-09-20 09:32:26.868446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cf3001a-a2bc-51f5-b2f0-80e0839adf22'}})  2025-09-20 09:32:26.868466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f5012b99-8722-5cc3-9d11-b95ce6d4943a'}})  2025-09-20 09:32:26.868514 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.868537 | orchestrator | 2025-09-20 09:32:26.868556 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-20 09:32:26.868572 | orchestrator | Saturday 20 September 2025 09:32:22 +0000 (0:00:00.151) 0:00:10.248 **** 2025-09-20 09:32:26.868585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cf3001a-a2bc-51f5-b2f0-80e0839adf22'}})  2025-09-20 09:32:26.868597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f5012b99-8722-5cc3-9d11-b95ce6d4943a'}})  2025-09-20 09:32:26.868609 | orchestrator | skipping: [testbed-node-3] 2025-09-20 
09:32:26.868620 | orchestrator | 2025-09-20 09:32:26.868630 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-20 09:32:26.868641 | orchestrator | Saturday 20 September 2025 09:32:22 +0000 (0:00:00.339) 0:00:10.587 **** 2025-09-20 09:32:26.868652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '0cf3001a-a2bc-51f5-b2f0-80e0839adf22'}})  2025-09-20 09:32:26.868662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f5012b99-8722-5cc3-9d11-b95ce6d4943a'}})  2025-09-20 09:32:26.868673 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.868684 | orchestrator | 2025-09-20 09:32:26.868710 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-20 09:32:26.868722 | orchestrator | Saturday 20 September 2025 09:32:22 +0000 (0:00:00.168) 0:00:10.756 **** 2025-09-20 09:32:26.868733 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:32:26.868743 | orchestrator | 2025-09-20 09:32:26.868760 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-20 09:32:26.868771 | orchestrator | Saturday 20 September 2025 09:32:22 +0000 (0:00:00.159) 0:00:10.915 **** 2025-09-20 09:32:26.868781 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:32:26.868792 | orchestrator | 2025-09-20 09:32:26.868803 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-20 09:32:26.868813 | orchestrator | Saturday 20 September 2025 09:32:22 +0000 (0:00:00.157) 0:00:11.072 **** 2025-09-20 09:32:26.868824 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:32:26.868834 | orchestrator | 2025-09-20 09:32:26.868845 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-20 09:32:26.868856 | orchestrator | Saturday 20 September 2025 09:32:23 +0000 
(0:00:00.149) 0:00:11.222 ****
2025-09-20 09:32:26.868866 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:32:26.868877 | orchestrator |
2025-09-20 09:32:26.868896 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-20 09:32:26.868907 | orchestrator | Saturday 20 September 2025 09:32:23 +0000 (0:00:00.132) 0:00:11.354 ****
2025-09-20 09:32:26.868918 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:32:26.868929 | orchestrator |
2025-09-20 09:32:26.868940 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-20 09:32:26.868950 | orchestrator | Saturday 20 September 2025 09:32:23 +0000 (0:00:00.139) 0:00:11.493 ****
2025-09-20 09:32:26.868961 | orchestrator | ok: [testbed-node-3] => {
2025-09-20 09:32:26.868971 | orchestrator |     "ceph_osd_devices": {
2025-09-20 09:32:26.868982 | orchestrator |         "sdb": {
2025-09-20 09:32:26.868994 | orchestrator |             "osd_lvm_uuid": "0cf3001a-a2bc-51f5-b2f0-80e0839adf22"
2025-09-20 09:32:26.869005 | orchestrator |         },
2025-09-20 09:32:26.869016 | orchestrator |         "sdc": {
2025-09-20 09:32:26.869026 | orchestrator |             "osd_lvm_uuid": "f5012b99-8722-5cc3-9d11-b95ce6d4943a"
2025-09-20 09:32:26.869037 | orchestrator |         }
2025-09-20 09:32:26.869048 | orchestrator |     }
2025-09-20 09:32:26.869059 | orchestrator | }
2025-09-20 09:32:26.869069 | orchestrator |
2025-09-20 09:32:26.869080 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-20 09:32:26.869091 | orchestrator | Saturday 20 September 2025 09:32:23 +0000 (0:00:00.148) 0:00:11.642 ****
2025-09-20 09:32:26.869102 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:32:26.869112 | orchestrator |
2025-09-20 09:32:26.869123 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-20 09:32:26.869133 | orchestrator | Saturday 20 September 2025 09:32:23 +0000 (0:00:00.135) 0:00:11.777 ****
2025-09-20 09:32:26.869144 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:32:26.869155 | orchestrator |
2025-09-20 09:32:26.869165 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-20 09:32:26.869176 | orchestrator | Saturday 20 September 2025 09:32:23 +0000 (0:00:00.136) 0:00:11.914 ****
2025-09-20 09:32:26.869186 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:32:26.869197 | orchestrator |
2025-09-20 09:32:26.869207 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-20 09:32:26.869218 | orchestrator | Saturday 20 September 2025 09:32:23 +0000 (0:00:00.147) 0:00:12.061 ****
2025-09-20 09:32:26.869229 | orchestrator | changed: [testbed-node-3] => {
2025-09-20 09:32:26.869239 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-20 09:32:26.869250 | orchestrator |         "ceph_osd_devices": {
2025-09-20 09:32:26.869260 | orchestrator |             "sdb": {
2025-09-20 09:32:26.869271 | orchestrator |                 "osd_lvm_uuid": "0cf3001a-a2bc-51f5-b2f0-80e0839adf22"
2025-09-20 09:32:26.869282 | orchestrator |             },
2025-09-20 09:32:26.869293 | orchestrator |             "sdc": {
2025-09-20 09:32:26.869303 | orchestrator |                 "osd_lvm_uuid": "f5012b99-8722-5cc3-9d11-b95ce6d4943a"
2025-09-20 09:32:26.869314 | orchestrator |             }
2025-09-20 09:32:26.869325 | orchestrator |         },
2025-09-20 09:32:26.869336 | orchestrator |         "lvm_volumes": [
2025-09-20 09:32:26.869346 | orchestrator |             {
2025-09-20 09:32:26.869357 | orchestrator |                 "data": "osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22",
2025-09-20 09:32:26.869368 | orchestrator |                 "data_vg": "ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22"
2025-09-20 09:32:26.869378 | orchestrator |             },
2025-09-20 09:32:26.869389 | orchestrator |             {
2025-09-20 09:32:26.869400 | orchestrator |                 "data": "osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a",
2025-09-20 09:32:26.869410 | orchestrator |                 "data_vg": "ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a"
2025-09-20 09:32:26.869421 | orchestrator |             }
2025-09-20 09:32:26.869432 | orchestrator |         ]
2025-09-20 09:32:26.869442 | orchestrator |     }
2025-09-20 09:32:26.869453 | orchestrator | }
2025-09-20 09:32:26.869464 | orchestrator |
2025-09-20 09:32:26.869512 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-20 09:32:26.869540 | orchestrator | Saturday 20 September 2025 09:32:24 +0000 (0:00:00.219) 0:00:12.281 ****
2025-09-20 09:32:26.869551 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-20 09:32:26.869562 | orchestrator |
2025-09-20 09:32:26.869573 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-20 09:32:26.869583 | orchestrator |
2025-09-20 09:32:26.869594 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-20 09:32:26.869604 | orchestrator | Saturday 20 September 2025 09:32:26 +0000 (0:00:02.265) 0:00:14.547 ****
2025-09-20 09:32:26.869615 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-20 09:32:26.869625 | orchestrator |
2025-09-20 09:32:26.869636 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-20 09:32:26.869646 | orchestrator | Saturday 20 September 2025 09:32:26 +0000 (0:00:00.261) 0:00:14.808 ****
2025-09-20 09:32:26.869657 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:32:26.869668 | orchestrator |
2025-09-20 09:32:26.869678 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-20 09:32:26.869696 | orchestrator | Saturday 20 September 2025 09:32:26 +0000 (0:00:00.228) 0:00:15.037 ****
2025-09-20 09:32:33.872463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-20 09:32:33.872597 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-20 09:32:33.872614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-20 09:32:33.872626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-20 09:32:33.872637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-20 09:32:33.872649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-20 09:32:33.872660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-20 09:32:33.872671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-20 09:32:33.872682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-20 09:32:33.872693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-20 09:32:33.872704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-20 09:32:33.872715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-20 09:32:33.872726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-20 09:32:33.872741 | orchestrator | 2025-09-20 09:32:33.872754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.872766 | orchestrator | Saturday 20 September 2025 09:32:27 +0000 (0:00:00.403) 0:00:15.441 **** 2025-09-20 09:32:33.872778 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.872790 | orchestrator | 2025-09-20 09:32:33.872801 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 
09:32:33.872812 | orchestrator | Saturday 20 September 2025 09:32:27 +0000 (0:00:00.223) 0:00:15.664 **** 2025-09-20 09:32:33.872823 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.872833 | orchestrator | 2025-09-20 09:32:33.872844 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.872855 | orchestrator | Saturday 20 September 2025 09:32:27 +0000 (0:00:00.221) 0:00:15.885 **** 2025-09-20 09:32:33.872866 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.872877 | orchestrator | 2025-09-20 09:32:33.872888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.872899 | orchestrator | Saturday 20 September 2025 09:32:27 +0000 (0:00:00.186) 0:00:16.072 **** 2025-09-20 09:32:33.872910 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.872944 | orchestrator | 2025-09-20 09:32:33.872956 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.872967 | orchestrator | Saturday 20 September 2025 09:32:28 +0000 (0:00:00.184) 0:00:16.257 **** 2025-09-20 09:32:33.872977 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.872988 | orchestrator | 2025-09-20 09:32:33.872999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.873010 | orchestrator | Saturday 20 September 2025 09:32:28 +0000 (0:00:00.456) 0:00:16.713 **** 2025-09-20 09:32:33.873020 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873031 | orchestrator | 2025-09-20 09:32:33.873042 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.873053 | orchestrator | Saturday 20 September 2025 09:32:28 +0000 (0:00:00.174) 0:00:16.887 **** 2025-09-20 09:32:33.873080 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873091 | 
orchestrator | 2025-09-20 09:32:33.873102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.873113 | orchestrator | Saturday 20 September 2025 09:32:28 +0000 (0:00:00.192) 0:00:17.080 **** 2025-09-20 09:32:33.873123 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873134 | orchestrator | 2025-09-20 09:32:33.873145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.873156 | orchestrator | Saturday 20 September 2025 09:32:29 +0000 (0:00:00.201) 0:00:17.281 **** 2025-09-20 09:32:33.873167 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b) 2025-09-20 09:32:33.873179 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b) 2025-09-20 09:32:33.873190 | orchestrator | 2025-09-20 09:32:33.873201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.873212 | orchestrator | Saturday 20 September 2025 09:32:29 +0000 (0:00:00.372) 0:00:17.654 **** 2025-09-20 09:32:33.873222 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a) 2025-09-20 09:32:33.873233 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a) 2025-09-20 09:32:33.873244 | orchestrator | 2025-09-20 09:32:33.873255 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.873266 | orchestrator | Saturday 20 September 2025 09:32:29 +0000 (0:00:00.385) 0:00:18.039 **** 2025-09-20 09:32:33.873276 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9) 2025-09-20 09:32:33.873287 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9) 2025-09-20 09:32:33.873298 | orchestrator | 2025-09-20 09:32:33.873309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.873320 | orchestrator | Saturday 20 September 2025 09:32:30 +0000 (0:00:00.419) 0:00:18.459 **** 2025-09-20 09:32:33.873350 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26) 2025-09-20 09:32:33.873363 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26) 2025-09-20 09:32:33.873373 | orchestrator | 2025-09-20 09:32:33.873384 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:33.873395 | orchestrator | Saturday 20 September 2025 09:32:30 +0000 (0:00:00.402) 0:00:18.861 **** 2025-09-20 09:32:33.873406 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 09:32:33.873416 | orchestrator | 2025-09-20 09:32:33.873427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.873438 | orchestrator | Saturday 20 September 2025 09:32:31 +0000 (0:00:00.338) 0:00:19.199 **** 2025-09-20 09:32:33.873448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-20 09:32:33.873467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-20 09:32:33.873478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-20 09:32:33.873517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-20 09:32:33.873528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-20 09:32:33.873538 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-20 09:32:33.873549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-20 09:32:33.873560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-20 09:32:33.873571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-20 09:32:33.873581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-20 09:32:33.873592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-20 09:32:33.873603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-20 09:32:33.873613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-20 09:32:33.873624 | orchestrator | 2025-09-20 09:32:33.873635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.873646 | orchestrator | Saturday 20 September 2025 09:32:31 +0000 (0:00:00.350) 0:00:19.550 **** 2025-09-20 09:32:33.873657 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873667 | orchestrator | 2025-09-20 09:32:33.873678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.873689 | orchestrator | Saturday 20 September 2025 09:32:31 +0000 (0:00:00.180) 0:00:19.731 **** 2025-09-20 09:32:33.873700 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873711 | orchestrator | 2025-09-20 09:32:33.873728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.873739 | orchestrator | Saturday 20 September 2025 09:32:31 +0000 (0:00:00.441) 0:00:20.172 **** 
2025-09-20 09:32:33.873750 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873760 | orchestrator | 2025-09-20 09:32:33.873771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.873782 | orchestrator | Saturday 20 September 2025 09:32:32 +0000 (0:00:00.176) 0:00:20.349 **** 2025-09-20 09:32:33.873793 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873803 | orchestrator | 2025-09-20 09:32:33.873814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.873825 | orchestrator | Saturday 20 September 2025 09:32:32 +0000 (0:00:00.176) 0:00:20.525 **** 2025-09-20 09:32:33.873836 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873847 | orchestrator | 2025-09-20 09:32:33.873857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.873868 | orchestrator | Saturday 20 September 2025 09:32:32 +0000 (0:00:00.177) 0:00:20.702 **** 2025-09-20 09:32:33.873879 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873890 | orchestrator | 2025-09-20 09:32:33.873900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.873911 | orchestrator | Saturday 20 September 2025 09:32:32 +0000 (0:00:00.189) 0:00:20.892 **** 2025-09-20 09:32:33.873922 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873933 | orchestrator | 2025-09-20 09:32:33.873944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.873954 | orchestrator | Saturday 20 September 2025 09:32:32 +0000 (0:00:00.196) 0:00:21.089 **** 2025-09-20 09:32:33.873965 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.873976 | orchestrator | 2025-09-20 09:32:33.873986 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-20 09:32:33.874004 | orchestrator | Saturday 20 September 2025 09:32:33 +0000 (0:00:00.176) 0:00:21.266 **** 2025-09-20 09:32:33.874068 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-20 09:32:33.874083 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-20 09:32:33.874094 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-20 09:32:33.874105 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-20 09:32:33.874116 | orchestrator | 2025-09-20 09:32:33.874127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:33.874138 | orchestrator | Saturday 20 September 2025 09:32:33 +0000 (0:00:00.590) 0:00:21.856 **** 2025-09-20 09:32:33.874149 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:33.874160 | orchestrator | 2025-09-20 09:32:33.874179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:39.361441 | orchestrator | Saturday 20 September 2025 09:32:33 +0000 (0:00:00.187) 0:00:22.044 **** 2025-09-20 09:32:39.361602 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.361619 | orchestrator | 2025-09-20 09:32:39.361631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:39.361643 | orchestrator | Saturday 20 September 2025 09:32:34 +0000 (0:00:00.188) 0:00:22.233 **** 2025-09-20 09:32:39.361654 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.361665 | orchestrator | 2025-09-20 09:32:39.361676 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:39.361687 | orchestrator | Saturday 20 September 2025 09:32:34 +0000 (0:00:00.170) 0:00:22.403 **** 2025-09-20 09:32:39.361697 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.361708 | orchestrator | 2025-09-20 09:32:39.361719 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-09-20 09:32:39.361730 | orchestrator | Saturday 20 September 2025 09:32:34 +0000 (0:00:00.179) 0:00:22.583 **** 2025-09-20 09:32:39.361740 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-20 09:32:39.361751 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-20 09:32:39.361762 | orchestrator | 2025-09-20 09:32:39.361773 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-20 09:32:39.361783 | orchestrator | Saturday 20 September 2025 09:32:34 +0000 (0:00:00.340) 0:00:22.924 **** 2025-09-20 09:32:39.361794 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.361805 | orchestrator | 2025-09-20 09:32:39.361816 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-20 09:32:39.361827 | orchestrator | Saturday 20 September 2025 09:32:34 +0000 (0:00:00.155) 0:00:23.079 **** 2025-09-20 09:32:39.361838 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.361849 | orchestrator | 2025-09-20 09:32:39.361859 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-20 09:32:39.361870 | orchestrator | Saturday 20 September 2025 09:32:35 +0000 (0:00:00.106) 0:00:23.186 **** 2025-09-20 09:32:39.361881 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.361891 | orchestrator | 2025-09-20 09:32:39.361902 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-20 09:32:39.361913 | orchestrator | Saturday 20 September 2025 09:32:35 +0000 (0:00:00.125) 0:00:23.312 **** 2025-09-20 09:32:39.361924 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:32:39.361935 | orchestrator | 2025-09-20 09:32:39.361946 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-20 
09:32:39.361957 | orchestrator | Saturday 20 September 2025 09:32:35 +0000 (0:00:00.122) 0:00:23.434 **** 2025-09-20 09:32:39.361968 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6319afae-7c48-5c70-87a8-62ab4a9b6a4c'}}) 2025-09-20 09:32:39.361980 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '606172b3-e8d7-56e6-aaf4-86ed1800c0e9'}}) 2025-09-20 09:32:39.361993 | orchestrator | 2025-09-20 09:32:39.362005 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-20 09:32:39.362096 | orchestrator | Saturday 20 September 2025 09:32:35 +0000 (0:00:00.170) 0:00:23.604 **** 2025-09-20 09:32:39.362111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6319afae-7c48-5c70-87a8-62ab4a9b6a4c'}})  2025-09-20 09:32:39.362125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '606172b3-e8d7-56e6-aaf4-86ed1800c0e9'}})  2025-09-20 09:32:39.362137 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.362149 | orchestrator | 2025-09-20 09:32:39.362178 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-20 09:32:39.362191 | orchestrator | Saturday 20 September 2025 09:32:35 +0000 (0:00:00.133) 0:00:23.738 **** 2025-09-20 09:32:39.362204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6319afae-7c48-5c70-87a8-62ab4a9b6a4c'}})  2025-09-20 09:32:39.362216 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '606172b3-e8d7-56e6-aaf4-86ed1800c0e9'}})  2025-09-20 09:32:39.362229 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.362240 | orchestrator | 2025-09-20 09:32:39.362252 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-20 09:32:39.362264 | 
orchestrator | Saturday 20 September 2025 09:32:35 +0000 (0:00:00.137) 0:00:23.876 **** 2025-09-20 09:32:39.362276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6319afae-7c48-5c70-87a8-62ab4a9b6a4c'}})  2025-09-20 09:32:39.362288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '606172b3-e8d7-56e6-aaf4-86ed1800c0e9'}})  2025-09-20 09:32:39.362302 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.362314 | orchestrator | 2025-09-20 09:32:39.362327 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-20 09:32:39.362339 | orchestrator | Saturday 20 September 2025 09:32:35 +0000 (0:00:00.127) 0:00:24.003 **** 2025-09-20 09:32:39.362350 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:32:39.362361 | orchestrator | 2025-09-20 09:32:39.362371 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-20 09:32:39.362382 | orchestrator | Saturday 20 September 2025 09:32:35 +0000 (0:00:00.111) 0:00:24.115 **** 2025-09-20 09:32:39.362393 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:32:39.362403 | orchestrator | 2025-09-20 09:32:39.362414 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-20 09:32:39.362425 | orchestrator | Saturday 20 September 2025 09:32:36 +0000 (0:00:00.116) 0:00:24.232 **** 2025-09-20 09:32:39.362435 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.362446 | orchestrator | 2025-09-20 09:32:39.362476 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-20 09:32:39.362508 | orchestrator | Saturday 20 September 2025 09:32:36 +0000 (0:00:00.127) 0:00:24.359 **** 2025-09-20 09:32:39.362520 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.362531 | orchestrator | 2025-09-20 09:32:39.362542 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-09-20 09:32:39.362553 | orchestrator | Saturday 20 September 2025 09:32:36 +0000 (0:00:00.260) 0:00:24.620 **** 2025-09-20 09:32:39.362563 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.362574 | orchestrator | 2025-09-20 09:32:39.362585 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-20 09:32:39.362595 | orchestrator | Saturday 20 September 2025 09:32:36 +0000 (0:00:00.102) 0:00:24.722 **** 2025-09-20 09:32:39.362606 | orchestrator | ok: [testbed-node-4] => { 2025-09-20 09:32:39.362617 | orchestrator |  "ceph_osd_devices": { 2025-09-20 09:32:39.362628 | orchestrator |  "sdb": { 2025-09-20 09:32:39.362639 | orchestrator |  "osd_lvm_uuid": "6319afae-7c48-5c70-87a8-62ab4a9b6a4c" 2025-09-20 09:32:39.362651 | orchestrator |  }, 2025-09-20 09:32:39.362662 | orchestrator |  "sdc": { 2025-09-20 09:32:39.362681 | orchestrator |  "osd_lvm_uuid": "606172b3-e8d7-56e6-aaf4-86ed1800c0e9" 2025-09-20 09:32:39.362692 | orchestrator |  } 2025-09-20 09:32:39.362703 | orchestrator |  } 2025-09-20 09:32:39.362714 | orchestrator | } 2025-09-20 09:32:39.362725 | orchestrator | 2025-09-20 09:32:39.362736 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-20 09:32:39.362747 | orchestrator | Saturday 20 September 2025 09:32:36 +0000 (0:00:00.121) 0:00:24.843 **** 2025-09-20 09:32:39.362757 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.362768 | orchestrator | 2025-09-20 09:32:39.362779 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-20 09:32:39.362790 | orchestrator | Saturday 20 September 2025 09:32:36 +0000 (0:00:00.126) 0:00:24.970 **** 2025-09-20 09:32:39.362801 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.362811 | orchestrator | 2025-09-20 09:32:39.362822 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-09-20 09:32:39.362833 | orchestrator | Saturday 20 September 2025 09:32:36 +0000 (0:00:00.112) 0:00:25.082 **** 2025-09-20 09:32:39.362843 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:32:39.362854 | orchestrator | 2025-09-20 09:32:39.362865 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-20 09:32:39.362876 | orchestrator | Saturday 20 September 2025 09:32:37 +0000 (0:00:00.115) 0:00:25.197 **** 2025-09-20 09:32:39.362886 | orchestrator | changed: [testbed-node-4] => { 2025-09-20 09:32:39.362897 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-20 09:32:39.362908 | orchestrator |  "ceph_osd_devices": { 2025-09-20 09:32:39.362919 | orchestrator |  "sdb": { 2025-09-20 09:32:39.362930 | orchestrator |  "osd_lvm_uuid": "6319afae-7c48-5c70-87a8-62ab4a9b6a4c" 2025-09-20 09:32:39.362941 | orchestrator |  }, 2025-09-20 09:32:39.362952 | orchestrator |  "sdc": { 2025-09-20 09:32:39.362963 | orchestrator |  "osd_lvm_uuid": "606172b3-e8d7-56e6-aaf4-86ed1800c0e9" 2025-09-20 09:32:39.362973 | orchestrator |  } 2025-09-20 09:32:39.362984 | orchestrator |  }, 2025-09-20 09:32:39.362995 | orchestrator |  "lvm_volumes": [ 2025-09-20 09:32:39.363006 | orchestrator |  { 2025-09-20 09:32:39.363017 | orchestrator |  "data": "osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c", 2025-09-20 09:32:39.363028 | orchestrator |  "data_vg": "ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c" 2025-09-20 09:32:39.363039 | orchestrator |  }, 2025-09-20 09:32:39.363049 | orchestrator |  { 2025-09-20 09:32:39.363060 | orchestrator |  "data": "osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9", 2025-09-20 09:32:39.363071 | orchestrator |  "data_vg": "ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9" 2025-09-20 09:32:39.363081 | orchestrator |  } 2025-09-20 09:32:39.363092 | orchestrator |  ] 2025-09-20 09:32:39.363103 | orchestrator |  } 2025-09-20 09:32:39.363114 | 
orchestrator | } 2025-09-20 09:32:39.363125 | orchestrator | 2025-09-20 09:32:39.363135 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-20 09:32:39.363146 | orchestrator | Saturday 20 September 2025 09:32:37 +0000 (0:00:00.179) 0:00:25.377 **** 2025-09-20 09:32:39.363157 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-20 09:32:39.363167 | orchestrator | 2025-09-20 09:32:39.363178 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-20 09:32:39.363189 | orchestrator | 2025-09-20 09:32:39.363200 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 09:32:39.363210 | orchestrator | Saturday 20 September 2025 09:32:38 +0000 (0:00:00.918) 0:00:26.295 **** 2025-09-20 09:32:39.363221 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-20 09:32:39.363231 | orchestrator | 2025-09-20 09:32:39.363242 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 09:32:39.363253 | orchestrator | Saturday 20 September 2025 09:32:38 +0000 (0:00:00.381) 0:00:26.676 **** 2025-09-20 09:32:39.363270 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:32:39.363281 | orchestrator | 2025-09-20 09:32:39.363298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:39.363309 | orchestrator | Saturday 20 September 2025 09:32:38 +0000 (0:00:00.488) 0:00:27.165 **** 2025-09-20 09:32:39.363320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-20 09:32:39.363331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-20 09:32:39.363342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-20 
09:32:39.363352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-20 09:32:39.363363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-20 09:32:39.363374 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-20 09:32:39.363390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-20 09:32:46.920478 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-20 09:32:46.920630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-20 09:32:46.920645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-20 09:32:46.920656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-20 09:32:46.920668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-20 09:32:46.920678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-20 09:32:46.920689 | orchestrator | 2025-09-20 09:32:46.920702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.920713 | orchestrator | Saturday 20 September 2025 09:32:39 +0000 (0:00:00.364) 0:00:27.530 **** 2025-09-20 09:32:46.920724 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.920736 | orchestrator | 2025-09-20 09:32:46.920746 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.920757 | orchestrator | Saturday 20 September 2025 09:32:39 +0000 (0:00:00.166) 0:00:27.696 **** 2025-09-20 09:32:46.920768 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.920779 | orchestrator | 
2025-09-20 09:32:46.920789 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.920800 | orchestrator | Saturday 20 September 2025 09:32:39 +0000 (0:00:00.188) 0:00:27.885 **** 2025-09-20 09:32:46.920811 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.920821 | orchestrator | 2025-09-20 09:32:46.920832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.920842 | orchestrator | Saturday 20 September 2025 09:32:39 +0000 (0:00:00.212) 0:00:28.097 **** 2025-09-20 09:32:46.920853 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.920864 | orchestrator | 2025-09-20 09:32:46.920875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.920885 | orchestrator | Saturday 20 September 2025 09:32:40 +0000 (0:00:00.182) 0:00:28.280 **** 2025-09-20 09:32:46.920896 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.920907 | orchestrator | 2025-09-20 09:32:46.920917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.920928 | orchestrator | Saturday 20 September 2025 09:32:40 +0000 (0:00:00.166) 0:00:28.446 **** 2025-09-20 09:32:46.920939 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.920949 | orchestrator | 2025-09-20 09:32:46.920960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.920971 | orchestrator | Saturday 20 September 2025 09:32:40 +0000 (0:00:00.165) 0:00:28.611 **** 2025-09-20 09:32:46.920982 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921015 | orchestrator | 2025-09-20 09:32:46.921029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.921042 | orchestrator | Saturday 20 September 2025 09:32:40 +0000 
(0:00:00.160) 0:00:28.772 **** 2025-09-20 09:32:46.921055 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921067 | orchestrator | 2025-09-20 09:32:46.921079 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.921091 | orchestrator | Saturday 20 September 2025 09:32:40 +0000 (0:00:00.174) 0:00:28.947 **** 2025-09-20 09:32:46.921104 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8) 2025-09-20 09:32:46.921116 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8) 2025-09-20 09:32:46.921129 | orchestrator | 2025-09-20 09:32:46.921141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.921153 | orchestrator | Saturday 20 September 2025 09:32:41 +0000 (0:00:00.538) 0:00:29.485 **** 2025-09-20 09:32:46.921166 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57) 2025-09-20 09:32:46.921178 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57) 2025-09-20 09:32:46.921191 | orchestrator | 2025-09-20 09:32:46.921203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.921215 | orchestrator | Saturday 20 September 2025 09:32:42 +0000 (0:00:00.719) 0:00:30.205 **** 2025-09-20 09:32:46.921227 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5) 2025-09-20 09:32:46.921240 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5) 2025-09-20 09:32:46.921252 | orchestrator | 2025-09-20 09:32:46.921264 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.921276 | orchestrator | 
Saturday 20 September 2025 09:32:42 +0000 (0:00:00.395) 0:00:30.601 **** 2025-09-20 09:32:46.921288 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d) 2025-09-20 09:32:46.921300 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d) 2025-09-20 09:32:46.921312 | orchestrator | 2025-09-20 09:32:46.921325 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:32:46.921338 | orchestrator | Saturday 20 September 2025 09:32:42 +0000 (0:00:00.419) 0:00:31.020 **** 2025-09-20 09:32:46.921351 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 09:32:46.921363 | orchestrator | 2025-09-20 09:32:46.921376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921386 | orchestrator | Saturday 20 September 2025 09:32:43 +0000 (0:00:00.330) 0:00:31.350 **** 2025-09-20 09:32:46.921413 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-20 09:32:46.921425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-20 09:32:46.921435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-20 09:32:46.921446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-20 09:32:46.921456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-20 09:32:46.921466 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-20 09:32:46.921494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-20 09:32:46.921525 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-20 09:32:46.921537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-20 09:32:46.921558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-20 09:32:46.921569 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-20 09:32:46.921580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-20 09:32:46.921590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-20 09:32:46.921601 | orchestrator | 2025-09-20 09:32:46.921612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921622 | orchestrator | Saturday 20 September 2025 09:32:43 +0000 (0:00:00.380) 0:00:31.730 **** 2025-09-20 09:32:46.921633 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921644 | orchestrator | 2025-09-20 09:32:46.921654 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921665 | orchestrator | Saturday 20 September 2025 09:32:43 +0000 (0:00:00.180) 0:00:31.911 **** 2025-09-20 09:32:46.921675 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921686 | orchestrator | 2025-09-20 09:32:46.921697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921708 | orchestrator | Saturday 20 September 2025 09:32:43 +0000 (0:00:00.186) 0:00:32.097 **** 2025-09-20 09:32:46.921718 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921729 | orchestrator | 2025-09-20 09:32:46.921745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921756 | 
orchestrator | Saturday 20 September 2025 09:32:44 +0000 (0:00:00.187) 0:00:32.285 **** 2025-09-20 09:32:46.921767 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921777 | orchestrator | 2025-09-20 09:32:46.921788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921799 | orchestrator | Saturday 20 September 2025 09:32:44 +0000 (0:00:00.194) 0:00:32.480 **** 2025-09-20 09:32:46.921809 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921820 | orchestrator | 2025-09-20 09:32:46.921830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921841 | orchestrator | Saturday 20 September 2025 09:32:44 +0000 (0:00:00.230) 0:00:32.710 **** 2025-09-20 09:32:46.921851 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921862 | orchestrator | 2025-09-20 09:32:46.921873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921883 | orchestrator | Saturday 20 September 2025 09:32:45 +0000 (0:00:00.488) 0:00:33.198 **** 2025-09-20 09:32:46.921894 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921904 | orchestrator | 2025-09-20 09:32:46.921915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921926 | orchestrator | Saturday 20 September 2025 09:32:45 +0000 (0:00:00.199) 0:00:33.398 **** 2025-09-20 09:32:46.921936 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.921947 | orchestrator | 2025-09-20 09:32:46.921958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.921968 | orchestrator | Saturday 20 September 2025 09:32:45 +0000 (0:00:00.192) 0:00:33.591 **** 2025-09-20 09:32:46.921979 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-20 09:32:46.921990 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-09-20 09:32:46.922001 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-20 09:32:46.922011 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-20 09:32:46.922084 | orchestrator | 2025-09-20 09:32:46.922095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.922106 | orchestrator | Saturday 20 September 2025 09:32:46 +0000 (0:00:00.663) 0:00:34.255 **** 2025-09-20 09:32:46.922117 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.922127 | orchestrator | 2025-09-20 09:32:46.922138 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.922156 | orchestrator | Saturday 20 September 2025 09:32:46 +0000 (0:00:00.214) 0:00:34.470 **** 2025-09-20 09:32:46.922167 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.922177 | orchestrator | 2025-09-20 09:32:46.922188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.922199 | orchestrator | Saturday 20 September 2025 09:32:46 +0000 (0:00:00.215) 0:00:34.685 **** 2025-09-20 09:32:46.922209 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.922220 | orchestrator | 2025-09-20 09:32:46.922230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:32:46.922241 | orchestrator | Saturday 20 September 2025 09:32:46 +0000 (0:00:00.222) 0:00:34.908 **** 2025-09-20 09:32:46.922251 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:46.922262 | orchestrator | 2025-09-20 09:32:46.922272 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-20 09:32:46.922290 | orchestrator | Saturday 20 September 2025 09:32:46 +0000 (0:00:00.181) 0:00:35.089 **** 2025-09-20 09:32:51.045918 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None}) 2025-09-20 09:32:51.046013 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-20 09:32:51.046078 | orchestrator | 2025-09-20 09:32:51.046088 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-20 09:32:51.046095 | orchestrator | Saturday 20 September 2025 09:32:47 +0000 (0:00:00.160) 0:00:35.250 **** 2025-09-20 09:32:51.046102 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046110 | orchestrator | 2025-09-20 09:32:51.046117 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-20 09:32:51.046125 | orchestrator | Saturday 20 September 2025 09:32:47 +0000 (0:00:00.123) 0:00:35.373 **** 2025-09-20 09:32:51.046132 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046139 | orchestrator | 2025-09-20 09:32:51.046146 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-20 09:32:51.046153 | orchestrator | Saturday 20 September 2025 09:32:47 +0000 (0:00:00.133) 0:00:35.506 **** 2025-09-20 09:32:51.046160 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046167 | orchestrator | 2025-09-20 09:32:51.046174 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-20 09:32:51.046181 | orchestrator | Saturday 20 September 2025 09:32:47 +0000 (0:00:00.119) 0:00:35.626 **** 2025-09-20 09:32:51.046188 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:32:51.046196 | orchestrator | 2025-09-20 09:32:51.046203 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-20 09:32:51.046210 | orchestrator | Saturday 20 September 2025 09:32:47 +0000 (0:00:00.286) 0:00:35.912 **** 2025-09-20 09:32:51.046219 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a0e476ce-8dbb-5cb3-b205-e96c67f25126'}}) 
2025-09-20 09:32:51.046226 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54d5d251-b5b9-5293-b72e-54d20a6e98e4'}}) 2025-09-20 09:32:51.046233 | orchestrator | 2025-09-20 09:32:51.046241 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-20 09:32:51.046248 | orchestrator | Saturday 20 September 2025 09:32:47 +0000 (0:00:00.170) 0:00:36.083 **** 2025-09-20 09:32:51.046255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a0e476ce-8dbb-5cb3-b205-e96c67f25126'}})  2025-09-20 09:32:51.046265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54d5d251-b5b9-5293-b72e-54d20a6e98e4'}})  2025-09-20 09:32:51.046272 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046279 | orchestrator | 2025-09-20 09:32:51.046287 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-20 09:32:51.046294 | orchestrator | Saturday 20 September 2025 09:32:48 +0000 (0:00:00.164) 0:00:36.247 **** 2025-09-20 09:32:51.046301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a0e476ce-8dbb-5cb3-b205-e96c67f25126'}})  2025-09-20 09:32:51.046331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54d5d251-b5b9-5293-b72e-54d20a6e98e4'}})  2025-09-20 09:32:51.046339 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046347 | orchestrator | 2025-09-20 09:32:51.046354 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-20 09:32:51.046361 | orchestrator | Saturday 20 September 2025 09:32:48 +0000 (0:00:00.173) 0:00:36.420 **** 2025-09-20 09:32:51.046368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a0e476ce-8dbb-5cb3-b205-e96c67f25126'}})  2025-09-20 
09:32:51.046389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54d5d251-b5b9-5293-b72e-54d20a6e98e4'}})  2025-09-20 09:32:51.046396 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046403 | orchestrator | 2025-09-20 09:32:51.046410 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-20 09:32:51.046418 | orchestrator | Saturday 20 September 2025 09:32:48 +0000 (0:00:00.127) 0:00:36.548 **** 2025-09-20 09:32:51.046425 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:32:51.046432 | orchestrator | 2025-09-20 09:32:51.046439 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-20 09:32:51.046446 | orchestrator | Saturday 20 September 2025 09:32:48 +0000 (0:00:00.117) 0:00:36.666 **** 2025-09-20 09:32:51.046453 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:32:51.046460 | orchestrator | 2025-09-20 09:32:51.046469 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-20 09:32:51.046477 | orchestrator | Saturday 20 September 2025 09:32:48 +0000 (0:00:00.112) 0:00:36.778 **** 2025-09-20 09:32:51.046485 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046494 | orchestrator | 2025-09-20 09:32:51.046530 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-20 09:32:51.046540 | orchestrator | Saturday 20 September 2025 09:32:48 +0000 (0:00:00.133) 0:00:36.912 **** 2025-09-20 09:32:51.046548 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046557 | orchestrator | 2025-09-20 09:32:51.046565 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-20 09:32:51.046573 | orchestrator | Saturday 20 September 2025 09:32:48 +0000 (0:00:00.155) 0:00:37.067 **** 2025-09-20 09:32:51.046582 | orchestrator | skipping: [testbed-node-5] 
2025-09-20 09:32:51.046590 | orchestrator | 2025-09-20 09:32:51.046598 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-20 09:32:51.046606 | orchestrator | Saturday 20 September 2025 09:32:49 +0000 (0:00:00.150) 0:00:37.217 **** 2025-09-20 09:32:51.046615 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 09:32:51.046626 | orchestrator |  "ceph_osd_devices": { 2025-09-20 09:32:51.046640 | orchestrator |  "sdb": { 2025-09-20 09:32:51.046653 | orchestrator |  "osd_lvm_uuid": "a0e476ce-8dbb-5cb3-b205-e96c67f25126" 2025-09-20 09:32:51.046685 | orchestrator |  }, 2025-09-20 09:32:51.046699 | orchestrator |  "sdc": { 2025-09-20 09:32:51.046713 | orchestrator |  "osd_lvm_uuid": "54d5d251-b5b9-5293-b72e-54d20a6e98e4" 2025-09-20 09:32:51.046726 | orchestrator |  } 2025-09-20 09:32:51.046740 | orchestrator |  } 2025-09-20 09:32:51.046754 | orchestrator | } 2025-09-20 09:32:51.046767 | orchestrator | 2025-09-20 09:32:51.046781 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-20 09:32:51.046795 | orchestrator | Saturday 20 September 2025 09:32:49 +0000 (0:00:00.144) 0:00:37.361 **** 2025-09-20 09:32:51.046808 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046821 | orchestrator | 2025-09-20 09:32:51.046833 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-20 09:32:51.046847 | orchestrator | Saturday 20 September 2025 09:32:49 +0000 (0:00:00.125) 0:00:37.487 **** 2025-09-20 09:32:51.046854 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:32:51.046861 | orchestrator | 2025-09-20 09:32:51.046869 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-20 09:32:51.046885 | orchestrator | Saturday 20 September 2025 09:32:49 +0000 (0:00:00.352) 0:00:37.840 **** 2025-09-20 09:32:51.046892 | orchestrator | skipping: [testbed-node-5] 
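The `Compile lvm_volumes` and `Print configuration data` tasks above turn the per-disk `ceph_osd_devices` mapping into the `lvm_volumes` list that ceph-ansible consumes. A minimal Python sketch of that transformation, using the VG/LV naming scheme visible in the log output (`ceph-<uuid>` / `osd-block-<uuid>`); the helper itself is illustrative, not the playbook's actual Jinja template:

```python
# Sketch: derive a ceph-ansible style lvm_volumes list from a
# ceph_osd_devices mapping like the one printed above. The naming
# scheme matches the log; the function name is hypothetical.
def compile_lvm_volumes(ceph_osd_devices):
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

# The mapping for testbed-node-5 as shown in the task output:
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "a0e476ce-8dbb-5cb3-b205-e96c67f25126"},
    "sdc": {"osd_lvm_uuid": "54d5d251-b5b9-5293-b72e-54d20a6e98e4"},
}
print(compile_lvm_volumes(ceph_osd_devices))
```

The result matches the `lvm_volumes` list printed by the `Print configuration data` task: one `data`/`data_vg` pair per OSD disk, keyed only by the generated UUID (the "block only" case; the skipped block+db/block+wal variants would add `db`/`wal` keys).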
2025-09-20 09:32:51.046899 | orchestrator | 2025-09-20 09:32:51.046906 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-20 09:32:51.046913 | orchestrator | Saturday 20 September 2025 09:32:49 +0000 (0:00:00.138) 0:00:37.978 **** 2025-09-20 09:32:51.046920 | orchestrator | changed: [testbed-node-5] => { 2025-09-20 09:32:51.046927 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-20 09:32:51.046934 | orchestrator |  "ceph_osd_devices": { 2025-09-20 09:32:51.046942 | orchestrator |  "sdb": { 2025-09-20 09:32:51.046949 | orchestrator |  "osd_lvm_uuid": "a0e476ce-8dbb-5cb3-b205-e96c67f25126" 2025-09-20 09:32:51.046956 | orchestrator |  }, 2025-09-20 09:32:51.046963 | orchestrator |  "sdc": { 2025-09-20 09:32:51.046970 | orchestrator |  "osd_lvm_uuid": "54d5d251-b5b9-5293-b72e-54d20a6e98e4" 2025-09-20 09:32:51.046977 | orchestrator |  } 2025-09-20 09:32:51.046985 | orchestrator |  }, 2025-09-20 09:32:51.046992 | orchestrator |  "lvm_volumes": [ 2025-09-20 09:32:51.046999 | orchestrator |  { 2025-09-20 09:32:51.047006 | orchestrator |  "data": "osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126", 2025-09-20 09:32:51.047014 | orchestrator |  "data_vg": "ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126" 2025-09-20 09:32:51.047021 | orchestrator |  }, 2025-09-20 09:32:51.047028 | orchestrator |  { 2025-09-20 09:32:51.047035 | orchestrator |  "data": "osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4", 2025-09-20 09:32:51.047042 | orchestrator |  "data_vg": "ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4" 2025-09-20 09:32:51.047049 | orchestrator |  } 2025-09-20 09:32:51.047056 | orchestrator |  ] 2025-09-20 09:32:51.047063 | orchestrator |  } 2025-09-20 09:32:51.047074 | orchestrator | } 2025-09-20 09:32:51.047081 | orchestrator | 2025-09-20 09:32:51.047088 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-20 09:32:51.047095 | orchestrator | Saturday 20 September 2025 
09:32:50 +0000 (0:00:00.247) 0:00:38.226 **** 2025-09-20 09:32:51.047102 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-20 09:32:51.047109 | orchestrator | 2025-09-20 09:32:51.047118 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:32:51.047131 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-20 09:32:51.047144 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-20 09:32:51.047156 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-20 09:32:51.047169 | orchestrator | 2025-09-20 09:32:51.047181 | orchestrator | 2025-09-20 09:32:51.047193 | orchestrator | 2025-09-20 09:32:51.047200 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:32:51.047207 | orchestrator | Saturday 20 September 2025 09:32:51 +0000 (0:00:00.982) 0:00:39.209 **** 2025-09-20 09:32:51.047214 | orchestrator | =============================================================================== 2025-09-20 09:32:51.047221 | orchestrator | Write configuration file ------------------------------------------------ 4.17s 2025-09-20 09:32:51.047228 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2025-09-20 09:32:51.047235 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s 2025-09-20 09:32:51.047242 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s 2025-09-20 09:32:51.047249 | orchestrator | Get initial list of available block devices ----------------------------- 0.92s 2025-09-20 09:32:51.047265 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.88s 2025-09-20 09:32:51.047272 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2025-09-20 09:32:51.047279 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.70s 2025-09-20 09:32:51.047286 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-09-20 09:32:51.047293 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.65s 2025-09-20 09:32:51.047300 | orchestrator | Print configuration data ------------------------------------------------ 0.65s 2025-09-20 09:32:51.047307 | orchestrator | Print DB devices -------------------------------------------------------- 0.60s 2025-09-20 09:32:51.047314 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2025-09-20 09:32:51.047321 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2025-09-20 09:32:51.047335 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.56s 2025-09-20 09:32:51.433071 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2025-09-20 09:32:51.433148 | orchestrator | Set WAL devices config data --------------------------------------------- 0.55s 2025-09-20 09:32:51.433157 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2025-09-20 09:32:51.433165 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2025-09-20 09:32:51.433172 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.51s 2025-09-20 09:33:13.984938 | orchestrator | 2025-09-20 09:33:13 | INFO  | Task 4b702700-95f8-453a-b380-ca44f5481008 (sync inventory) is running in background. Output coming soon. 
2025-09-20 09:33:38.766325 | orchestrator | 2025-09-20 09:33:15 | INFO  | Starting group_vars file reorganization 2025-09-20 09:33:38.766496 | orchestrator | 2025-09-20 09:33:15 | INFO  | Moved 0 file(s) to their respective directories 2025-09-20 09:33:38.766514 | orchestrator | 2025-09-20 09:33:15 | INFO  | Group_vars file reorganization completed 2025-09-20 09:33:38.766526 | orchestrator | 2025-09-20 09:33:17 | INFO  | Starting variable preparation from inventory 2025-09-20 09:33:38.766538 | orchestrator | 2025-09-20 09:33:20 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-20 09:33:38.766549 | orchestrator | 2025-09-20 09:33:20 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-20 09:33:38.766560 | orchestrator | 2025-09-20 09:33:20 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-20 09:33:38.766593 | orchestrator | 2025-09-20 09:33:20 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-20 09:33:38.766605 | orchestrator | 2025-09-20 09:33:20 | INFO  | Variable preparation completed 2025-09-20 09:33:38.766616 | orchestrator | 2025-09-20 09:33:22 | INFO  | Starting inventory overwrite handling 2025-09-20 09:33:38.766627 | orchestrator | 2025-09-20 09:33:22 | INFO  | Handling group overwrites in 99-overwrite 2025-09-20 09:33:38.766643 | orchestrator | 2025-09-20 09:33:22 | INFO  | Removing group frr:children from 60-generic 2025-09-20 09:33:38.766654 | orchestrator | 2025-09-20 09:33:22 | INFO  | Removing group storage:children from 50-kolla 2025-09-20 09:33:38.766665 | orchestrator | 2025-09-20 09:33:22 | INFO  | Removing group netbird:children from 50-infrastruture 2025-09-20 09:33:38.766675 | orchestrator | 2025-09-20 09:33:22 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-20 09:33:38.766686 | orchestrator | 2025-09-20 09:33:22 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-20 09:33:38.766697 | orchestrator | 2025-09-20 09:33:22 | INFO  | Handling group 
overwrites in 20-roles 2025-09-20 09:33:38.766708 | orchestrator | 2025-09-20 09:33:22 | INFO  | Removing group k3s_node from 50-infrastruture 2025-09-20 09:33:38.766744 | orchestrator | 2025-09-20 09:33:22 | INFO  | Removed 6 group(s) in total 2025-09-20 09:33:38.766756 | orchestrator | 2025-09-20 09:33:22 | INFO  | Inventory overwrite handling completed 2025-09-20 09:33:38.766767 | orchestrator | 2025-09-20 09:33:23 | INFO  | Starting merge of inventory files 2025-09-20 09:33:38.766777 | orchestrator | 2025-09-20 09:33:23 | INFO  | Inventory files merged successfully 2025-09-20 09:33:38.766788 | orchestrator | 2025-09-20 09:33:28 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-20 09:33:38.766799 | orchestrator | 2025-09-20 09:33:37 | INFO  | Successfully wrote ClusterShell configuration 2025-09-20 09:33:38.766810 | orchestrator | [master 0270283] 2025-09-20-09-33 2025-09-20 09:33:38.766821 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-20 09:33:40.900043 | orchestrator | 2025-09-20 09:33:40 | INFO  | Task fa218bd6-4043-4d04-84ac-41e36c62c12d (ceph-create-lvm-devices) was prepared for execution. 2025-09-20 09:33:40.900138 | orchestrator | 2025-09-20 09:33:40 | INFO  | It takes a moment until task fa218bd6-4043-4d04-84ac-41e36c62c12d (ceph-create-lvm-devices) has been started and output is visible here. 
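The "inventory overwrite handling" messages above report groups such as `frr:children` and `storage:children` being removed from lower-priority inventory layers so that the `99-overwrite` and `20-roles` layers win the merge. A simplified Python sketch of that idea, dropping one `[group]` section from an INI-style inventory; the parsing is deliberately naive and only illustrative of the mechanism, not the tool's actual implementation:

```python
# Sketch: remove a "[group:children]" section from an INI-style
# inventory, mirroring the "Removing group X from Y" log messages.
def remove_group(inventory_text, group):
    out, skipping = [], False
    for line in inventory_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("[") and stripped.endswith("]"):
            # A new section header ends any skip and may start one.
            skipping = stripped == f"[{group}]"
        if not skipping:
            out.append(line)
    return "\n".join(out)

inventory = "[frr:children]\ngeneric\n[storage]\ntestbed-node-3"
print(remove_group(inventory, "frr:children"))
```

After all layers are cleaned up this way, the remaining files can be merged without the lower-priority definitions shadowing the overrides.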
2025-09-20 09:33:52.517108 | orchestrator | 2025-09-20 09:33:52.517200 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-20 09:33:52.517209 | orchestrator | 2025-09-20 09:33:52.517216 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 09:33:52.517223 | orchestrator | Saturday 20 September 2025 09:33:44 +0000 (0:00:00.287) 0:00:00.287 **** 2025-09-20 09:33:52.517231 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 09:33:52.517237 | orchestrator | 2025-09-20 09:33:52.517244 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 09:33:52.517250 | orchestrator | Saturday 20 September 2025 09:33:45 +0000 (0:00:00.231) 0:00:00.519 **** 2025-09-20 09:33:52.517257 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:33:52.517264 | orchestrator | 2025-09-20 09:33:52.517270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517277 | orchestrator | Saturday 20 September 2025 09:33:45 +0000 (0:00:00.204) 0:00:00.723 **** 2025-09-20 09:33:52.517283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-20 09:33:52.517291 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-20 09:33:52.517297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-20 09:33:52.517304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-20 09:33:52.517310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-20 09:33:52.517316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-20 09:33:52.517323 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-20 09:33:52.517329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-20 09:33:52.517335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-20 09:33:52.517342 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-20 09:33:52.517348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-20 09:33:52.517354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-20 09:33:52.517388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-20 09:33:52.517394 | orchestrator | 2025-09-20 09:33:52.517401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517425 | orchestrator | Saturday 20 September 2025 09:33:45 +0000 (0:00:00.382) 0:00:01.105 **** 2025-09-20 09:33:52.517432 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517439 | orchestrator | 2025-09-20 09:33:52.517445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517451 | orchestrator | Saturday 20 September 2025 09:33:46 +0000 (0:00:00.403) 0:00:01.508 **** 2025-09-20 09:33:52.517457 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517463 | orchestrator | 2025-09-20 09:33:52.517469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517476 | orchestrator | Saturday 20 September 2025 09:33:46 +0000 (0:00:00.215) 0:00:01.724 **** 2025-09-20 09:33:52.517482 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517488 | orchestrator | 2025-09-20 09:33:52.517495 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-09-20 09:33:52.517501 | orchestrator | Saturday 20 September 2025 09:33:46 +0000 (0:00:00.188) 0:00:01.912 **** 2025-09-20 09:33:52.517507 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517513 | orchestrator | 2025-09-20 09:33:52.517520 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517526 | orchestrator | Saturday 20 September 2025 09:33:46 +0000 (0:00:00.191) 0:00:02.104 **** 2025-09-20 09:33:52.517532 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517538 | orchestrator | 2025-09-20 09:33:52.517545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517551 | orchestrator | Saturday 20 September 2025 09:33:46 +0000 (0:00:00.198) 0:00:02.302 **** 2025-09-20 09:33:52.517557 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517563 | orchestrator | 2025-09-20 09:33:52.517569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517576 | orchestrator | Saturday 20 September 2025 09:33:47 +0000 (0:00:00.234) 0:00:02.537 **** 2025-09-20 09:33:52.517582 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517588 | orchestrator | 2025-09-20 09:33:52.517594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517600 | orchestrator | Saturday 20 September 2025 09:33:47 +0000 (0:00:00.205) 0:00:02.743 **** 2025-09-20 09:33:52.517607 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517613 | orchestrator | 2025-09-20 09:33:52.517619 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517625 | orchestrator | Saturday 20 September 2025 09:33:47 +0000 (0:00:00.208) 0:00:02.952 **** 2025-09-20 09:33:52.517632 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590) 2025-09-20 09:33:52.517639 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590) 2025-09-20 09:33:52.517646 | orchestrator | 2025-09-20 09:33:52.517652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517658 | orchestrator | Saturday 20 September 2025 09:33:47 +0000 (0:00:00.424) 0:00:03.377 **** 2025-09-20 09:33:52.517676 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9) 2025-09-20 09:33:52.517684 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9) 2025-09-20 09:33:52.517691 | orchestrator | 2025-09-20 09:33:52.517698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517705 | orchestrator | Saturday 20 September 2025 09:33:48 +0000 (0:00:00.424) 0:00:03.801 **** 2025-09-20 09:33:52.517712 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650) 2025-09-20 09:33:52.517719 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650) 2025-09-20 09:33:52.517726 | orchestrator | 2025-09-20 09:33:52.517733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517745 | orchestrator | Saturday 20 September 2025 09:33:49 +0000 (0:00:00.649) 0:00:04.450 **** 2025-09-20 09:33:52.517752 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b) 2025-09-20 09:33:52.517759 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b) 2025-09-20 09:33:52.517766 | orchestrator | 2025-09-20 09:33:52.517773 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:33:52.517781 | orchestrator | Saturday 20 September 2025 09:33:49 +0000 (0:00:00.893) 0:00:05.344 **** 2025-09-20 09:33:52.517788 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 09:33:52.517795 | orchestrator | 2025-09-20 09:33:52.517802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:33:52.517809 | orchestrator | Saturday 20 September 2025 09:33:50 +0000 (0:00:00.352) 0:00:05.696 **** 2025-09-20 09:33:52.517815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-20 09:33:52.517822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-20 09:33:52.517829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-20 09:33:52.517836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-20 09:33:52.517855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-20 09:33:52.517863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-20 09:33:52.517870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-20 09:33:52.517876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-20 09:33:52.517883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-20 09:33:52.517890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-20 09:33:52.517897 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-20 09:33:52.517904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-20 09:33:52.517914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-20 09:33:52.517922 | orchestrator | 2025-09-20 09:33:52.517929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:33:52.517936 | orchestrator | Saturday 20 September 2025 09:33:50 +0000 (0:00:00.432) 0:00:06.128 **** 2025-09-20 09:33:52.517943 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517950 | orchestrator | 2025-09-20 09:33:52.517957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:33:52.517964 | orchestrator | Saturday 20 September 2025 09:33:50 +0000 (0:00:00.226) 0:00:06.355 **** 2025-09-20 09:33:52.517971 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.517978 | orchestrator | 2025-09-20 09:33:52.517985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:33:52.517992 | orchestrator | Saturday 20 September 2025 09:33:51 +0000 (0:00:00.212) 0:00:06.567 **** 2025-09-20 09:33:52.517999 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.518006 | orchestrator | 2025-09-20 09:33:52.518013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:33:52.518056 | orchestrator | Saturday 20 September 2025 09:33:51 +0000 (0:00:00.246) 0:00:06.814 **** 2025-09-20 09:33:52.518063 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.518069 | orchestrator | 2025-09-20 09:33:52.518075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:33:52.518086 | orchestrator | Saturday 20 September 
2025 09:33:51 +0000 (0:00:00.222) 0:00:07.036 **** 2025-09-20 09:33:52.518093 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.518099 | orchestrator | 2025-09-20 09:33:52.518105 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:33:52.518111 | orchestrator | Saturday 20 September 2025 09:33:51 +0000 (0:00:00.208) 0:00:07.245 **** 2025-09-20 09:33:52.518117 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.518123 | orchestrator | 2025-09-20 09:33:52.518129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:33:52.518135 | orchestrator | Saturday 20 September 2025 09:33:52 +0000 (0:00:00.207) 0:00:07.453 **** 2025-09-20 09:33:52.518142 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:33:52.518148 | orchestrator | 2025-09-20 09:33:52.518154 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:33:52.518160 | orchestrator | Saturday 20 September 2025 09:33:52 +0000 (0:00:00.252) 0:00:07.705 **** 2025-09-20 09:33:52.518170 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367000 | orchestrator | 2025-09-20 09:34:01.367112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:01.367129 | orchestrator | Saturday 20 September 2025 09:33:52 +0000 (0:00:00.216) 0:00:07.921 **** 2025-09-20 09:34:01.367140 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-20 09:34:01.367152 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-20 09:34:01.367164 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-20 09:34:01.367174 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-20 09:34:01.367185 | orchestrator | 2025-09-20 09:34:01.367196 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:01.367207 | 
orchestrator | Saturday 20 September 2025 09:33:53 +0000 (0:00:01.188) 0:00:09.110 **** 2025-09-20 09:34:01.367218 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367229 | orchestrator | 2025-09-20 09:34:01.367239 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:01.367250 | orchestrator | Saturday 20 September 2025 09:33:53 +0000 (0:00:00.255) 0:00:09.365 **** 2025-09-20 09:34:01.367261 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367272 | orchestrator | 2025-09-20 09:34:01.367282 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:01.367293 | orchestrator | Saturday 20 September 2025 09:33:54 +0000 (0:00:00.271) 0:00:09.637 **** 2025-09-20 09:34:01.367303 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367314 | orchestrator | 2025-09-20 09:34:01.367325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:01.367389 | orchestrator | Saturday 20 September 2025 09:33:54 +0000 (0:00:00.230) 0:00:09.867 **** 2025-09-20 09:34:01.367401 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367411 | orchestrator | 2025-09-20 09:34:01.367422 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-20 09:34:01.367433 | orchestrator | Saturday 20 September 2025 09:33:54 +0000 (0:00:00.218) 0:00:10.086 **** 2025-09-20 09:34:01.367444 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367454 | orchestrator | 2025-09-20 09:34:01.367465 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-20 09:34:01.367476 | orchestrator | Saturday 20 September 2025 09:33:54 +0000 (0:00:00.156) 0:00:10.242 **** 2025-09-20 09:34:01.367487 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'0cf3001a-a2bc-51f5-b2f0-80e0839adf22'}}) 2025-09-20 09:34:01.367499 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f5012b99-8722-5cc3-9d11-b95ce6d4943a'}}) 2025-09-20 09:34:01.367510 | orchestrator | 2025-09-20 09:34:01.367521 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-20 09:34:01.367532 | orchestrator | Saturday 20 September 2025 09:33:55 +0000 (0:00:00.235) 0:00:10.478 **** 2025-09-20 09:34:01.367547 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'}) 2025-09-20 09:34:01.367583 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'}) 2025-09-20 09:34:01.367597 | orchestrator | 2025-09-20 09:34:01.367609 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-20 09:34:01.367623 | orchestrator | Saturday 20 September 2025 09:33:57 +0000 (0:00:02.013) 0:00:12.492 **** 2025-09-20 09:34:01.367635 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:01.367649 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:01.367662 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367674 | orchestrator | 2025-09-20 09:34:01.367688 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-20 09:34:01.367733 | orchestrator | Saturday 20 September 2025 09:33:57 +0000 (0:00:00.185) 0:00:12.678 **** 2025-09-20 09:34:01.367747 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'}) 2025-09-20 09:34:01.367762 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'}) 2025-09-20 09:34:01.367780 | orchestrator | 2025-09-20 09:34:01.367793 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-20 09:34:01.367806 | orchestrator | Saturday 20 September 2025 09:33:58 +0000 (0:00:01.553) 0:00:14.231 **** 2025-09-20 09:34:01.367819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:01.367832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:01.367845 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367858 | orchestrator | 2025-09-20 09:34:01.367870 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-20 09:34:01.367885 | orchestrator | Saturday 20 September 2025 09:33:59 +0000 (0:00:00.202) 0:00:14.434 **** 2025-09-20 09:34:01.367898 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367910 | orchestrator | 2025-09-20 09:34:01.367921 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-20 09:34:01.367949 | orchestrator | Saturday 20 September 2025 09:33:59 +0000 (0:00:00.180) 0:00:14.614 **** 2025-09-20 09:34:01.367961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:01.367973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:01.367984 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.367994 | orchestrator | 2025-09-20 09:34:01.368005 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-20 09:34:01.368016 | orchestrator | Saturday 20 September 2025 09:33:59 +0000 (0:00:00.395) 0:00:15.010 **** 2025-09-20 09:34:01.368026 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.368037 | orchestrator | 2025-09-20 09:34:01.368047 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-20 09:34:01.368058 | orchestrator | Saturday 20 September 2025 09:33:59 +0000 (0:00:00.184) 0:00:15.195 **** 2025-09-20 09:34:01.368069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:01.368089 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:01.368100 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.368111 | orchestrator | 2025-09-20 09:34:01.368121 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-20 09:34:01.368132 | orchestrator | Saturday 20 September 2025 09:33:59 +0000 (0:00:00.220) 0:00:15.415 **** 2025-09-20 09:34:01.368143 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.368153 | orchestrator | 2025-09-20 09:34:01.368164 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-20 09:34:01.368174 | orchestrator | Saturday 20 September 2025 09:34:00 +0000 (0:00:00.168) 0:00:15.584 **** 2025-09-20 09:34:01.368185 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:01.368196 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:01.368206 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.368217 | orchestrator | 2025-09-20 09:34:01.368228 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-20 09:34:01.368239 | orchestrator | Saturday 20 September 2025 09:34:00 +0000 (0:00:00.190) 0:00:15.775 **** 2025-09-20 09:34:01.368249 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:34:01.368260 | orchestrator | 2025-09-20 09:34:01.368271 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-20 09:34:01.368281 | orchestrator | Saturday 20 September 2025 09:34:00 +0000 (0:00:00.158) 0:00:15.933 **** 2025-09-20 09:34:01.368315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:01.368327 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:01.368358 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.368376 | orchestrator | 2025-09-20 09:34:01.368394 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-20 09:34:01.368422 | orchestrator | Saturday 20 September 2025 09:34:00 +0000 (0:00:00.271) 0:00:16.205 **** 2025-09-20 09:34:01.368446 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  
2025-09-20 09:34:01.368463 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:01.368481 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.368501 | orchestrator | 2025-09-20 09:34:01.368521 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-20 09:34:01.368538 | orchestrator | Saturday 20 September 2025 09:34:00 +0000 (0:00:00.170) 0:00:16.375 **** 2025-09-20 09:34:01.368553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:01.368563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:01.368574 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.368585 | orchestrator | 2025-09-20 09:34:01.368596 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-20 09:34:01.368606 | orchestrator | Saturday 20 September 2025 09:34:01 +0000 (0:00:00.134) 0:00:16.510 **** 2025-09-20 09:34:01.368617 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.368638 | orchestrator | 2025-09-20 09:34:01.368649 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-20 09:34:01.368659 | orchestrator | Saturday 20 September 2025 09:34:01 +0000 (0:00:00.128) 0:00:16.638 **** 2025-09-20 09:34:01.368670 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:01.368680 | orchestrator | 2025-09-20 09:34:01.368700 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-20 09:34:08.274875 | orchestrator | Saturday 20 September 2025 09:34:01 +0000 
(0:00:00.145) 0:00:16.783 **** 2025-09-20 09:34:08.274932 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.274945 | orchestrator | 2025-09-20 09:34:08.274955 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-20 09:34:08.274966 | orchestrator | Saturday 20 September 2025 09:34:01 +0000 (0:00:00.141) 0:00:16.925 **** 2025-09-20 09:34:08.274976 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 09:34:08.274986 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-20 09:34:08.274996 | orchestrator | } 2025-09-20 09:34:08.275006 | orchestrator | 2025-09-20 09:34:08.275016 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-20 09:34:08.275025 | orchestrator | Saturday 20 September 2025 09:34:01 +0000 (0:00:00.300) 0:00:17.225 **** 2025-09-20 09:34:08.275035 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 09:34:08.275045 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-20 09:34:08.275054 | orchestrator | } 2025-09-20 09:34:08.275064 | orchestrator | 2025-09-20 09:34:08.275074 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-20 09:34:08.275083 | orchestrator | Saturday 20 September 2025 09:34:01 +0000 (0:00:00.161) 0:00:17.386 **** 2025-09-20 09:34:08.275092 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 09:34:08.275102 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-20 09:34:08.275112 | orchestrator | } 2025-09-20 09:34:08.275122 | orchestrator | 2025-09-20 09:34:08.275131 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-20 09:34:08.275141 | orchestrator | Saturday 20 September 2025 09:34:02 +0000 (0:00:00.169) 0:00:17.556 **** 2025-09-20 09:34:08.275151 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:34:08.275160 | orchestrator | 2025-09-20 09:34:08.275170 | orchestrator | TASK [Gather 
WAL VGs with total and available size in bytes] ******************* 2025-09-20 09:34:08.275179 | orchestrator | Saturday 20 September 2025 09:34:02 +0000 (0:00:00.642) 0:00:18.199 **** 2025-09-20 09:34:08.275189 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:34:08.275198 | orchestrator | 2025-09-20 09:34:08.275208 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-20 09:34:08.275217 | orchestrator | Saturday 20 September 2025 09:34:03 +0000 (0:00:00.501) 0:00:18.700 **** 2025-09-20 09:34:08.275227 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:34:08.275236 | orchestrator | 2025-09-20 09:34:08.275246 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-20 09:34:08.275255 | orchestrator | Saturday 20 September 2025 09:34:03 +0000 (0:00:00.528) 0:00:19.229 **** 2025-09-20 09:34:08.275265 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:34:08.275274 | orchestrator | 2025-09-20 09:34:08.275284 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-20 09:34:08.275293 | orchestrator | Saturday 20 September 2025 09:34:03 +0000 (0:00:00.186) 0:00:19.415 **** 2025-09-20 09:34:08.275303 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275338 | orchestrator | 2025-09-20 09:34:08.275349 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-20 09:34:08.275358 | orchestrator | Saturday 20 September 2025 09:34:04 +0000 (0:00:00.107) 0:00:19.523 **** 2025-09-20 09:34:08.275368 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275378 | orchestrator | 2025-09-20 09:34:08.275388 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-20 09:34:08.275397 | orchestrator | Saturday 20 September 2025 09:34:04 +0000 (0:00:00.108) 0:00:19.631 **** 2025-09-20 09:34:08.275407 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-20 09:34:08.275466 | orchestrator |  "vgs_report": { 2025-09-20 09:34:08.275485 | orchestrator |  "vg": [] 2025-09-20 09:34:08.275497 | orchestrator |  } 2025-09-20 09:34:08.275509 | orchestrator | } 2025-09-20 09:34:08.275520 | orchestrator | 2025-09-20 09:34:08.275532 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-20 09:34:08.275543 | orchestrator | Saturday 20 September 2025 09:34:04 +0000 (0:00:00.188) 0:00:19.820 **** 2025-09-20 09:34:08.275554 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275565 | orchestrator | 2025-09-20 09:34:08.275576 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-20 09:34:08.275587 | orchestrator | Saturday 20 September 2025 09:34:04 +0000 (0:00:00.135) 0:00:19.955 **** 2025-09-20 09:34:08.275597 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275608 | orchestrator | 2025-09-20 09:34:08.275619 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-20 09:34:08.275631 | orchestrator | Saturday 20 September 2025 09:34:04 +0000 (0:00:00.143) 0:00:20.099 **** 2025-09-20 09:34:08.275641 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275652 | orchestrator | 2025-09-20 09:34:08.275664 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-20 09:34:08.275674 | orchestrator | Saturday 20 September 2025 09:34:05 +0000 (0:00:00.425) 0:00:20.524 **** 2025-09-20 09:34:08.275685 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275696 | orchestrator | 2025-09-20 09:34:08.275707 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-20 09:34:08.275718 | orchestrator | Saturday 20 September 2025 09:34:05 +0000 (0:00:00.184) 0:00:20.709 **** 2025-09-20 09:34:08.275729 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 09:34:08.275740 | orchestrator | 2025-09-20 09:34:08.275752 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-20 09:34:08.275762 | orchestrator | Saturday 20 September 2025 09:34:05 +0000 (0:00:00.150) 0:00:20.859 **** 2025-09-20 09:34:08.275773 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275784 | orchestrator | 2025-09-20 09:34:08.275795 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-20 09:34:08.275806 | orchestrator | Saturday 20 September 2025 09:34:05 +0000 (0:00:00.164) 0:00:21.024 **** 2025-09-20 09:34:08.275818 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275828 | orchestrator | 2025-09-20 09:34:08.275838 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-20 09:34:08.275847 | orchestrator | Saturday 20 September 2025 09:34:05 +0000 (0:00:00.274) 0:00:21.299 **** 2025-09-20 09:34:08.275857 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275866 | orchestrator | 2025-09-20 09:34:08.275876 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-20 09:34:08.275896 | orchestrator | Saturday 20 September 2025 09:34:06 +0000 (0:00:00.208) 0:00:21.507 **** 2025-09-20 09:34:08.275906 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275915 | orchestrator | 2025-09-20 09:34:08.275925 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-20 09:34:08.275934 | orchestrator | Saturday 20 September 2025 09:34:06 +0000 (0:00:00.144) 0:00:21.652 **** 2025-09-20 09:34:08.275944 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275953 | orchestrator | 2025-09-20 09:34:08.275963 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-20 09:34:08.275972 | 
orchestrator | Saturday 20 September 2025 09:34:06 +0000 (0:00:00.149) 0:00:21.801 **** 2025-09-20 09:34:08.275982 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.275991 | orchestrator | 2025-09-20 09:34:08.276001 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-20 09:34:08.276010 | orchestrator | Saturday 20 September 2025 09:34:06 +0000 (0:00:00.151) 0:00:21.953 **** 2025-09-20 09:34:08.276019 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.276029 | orchestrator | 2025-09-20 09:34:08.276044 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-20 09:34:08.276053 | orchestrator | Saturday 20 September 2025 09:34:06 +0000 (0:00:00.151) 0:00:22.104 **** 2025-09-20 09:34:08.276063 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.276072 | orchestrator | 2025-09-20 09:34:08.276082 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-20 09:34:08.276091 | orchestrator | Saturday 20 September 2025 09:34:06 +0000 (0:00:00.140) 0:00:22.245 **** 2025-09-20 09:34:08.276101 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.276110 | orchestrator | 2025-09-20 09:34:08.276120 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-20 09:34:08.276129 | orchestrator | Saturday 20 September 2025 09:34:06 +0000 (0:00:00.156) 0:00:22.402 **** 2025-09-20 09:34:08.276140 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:08.276151 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:08.276161 | orchestrator | skipping: [testbed-node-3] 2025-09-20 
09:34:08.276170 | orchestrator | 2025-09-20 09:34:08.276180 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-20 09:34:08.276189 | orchestrator | Saturday 20 September 2025 09:34:07 +0000 (0:00:00.406) 0:00:22.809 **** 2025-09-20 09:34:08.276199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:08.276208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:08.276218 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.276228 | orchestrator | 2025-09-20 09:34:08.276237 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-20 09:34:08.276247 | orchestrator | Saturday 20 September 2025 09:34:07 +0000 (0:00:00.198) 0:00:23.007 **** 2025-09-20 09:34:08.276256 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:08.276266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:08.276275 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.276285 | orchestrator | 2025-09-20 09:34:08.276294 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-20 09:34:08.276304 | orchestrator | Saturday 20 September 2025 09:34:07 +0000 (0:00:00.165) 0:00:23.173 **** 2025-09-20 09:34:08.276329 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 
09:34:08.276339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:08.276348 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.276358 | orchestrator | 2025-09-20 09:34:08.276367 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-20 09:34:08.276377 | orchestrator | Saturday 20 September 2025 09:34:07 +0000 (0:00:00.170) 0:00:23.343 **** 2025-09-20 09:34:08.276387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:08.276396 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:08.276406 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:08.276421 | orchestrator | 2025-09-20 09:34:08.276430 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-20 09:34:08.276440 | orchestrator | Saturday 20 September 2025 09:34:08 +0000 (0:00:00.166) 0:00:23.510 **** 2025-09-20 09:34:08.276456 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:08.276471 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:14.162114 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:14.162179 | orchestrator | 2025-09-20 09:34:14.162190 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-20 09:34:14.162202 | orchestrator | Saturday 20 September 2025 
09:34:08 +0000 (0:00:00.185) 0:00:23.696 **** 2025-09-20 09:34:14.162212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:14.162224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:14.162233 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:14.162243 | orchestrator | 2025-09-20 09:34:14.162253 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-20 09:34:14.162262 | orchestrator | Saturday 20 September 2025 09:34:08 +0000 (0:00:00.169) 0:00:23.866 **** 2025-09-20 09:34:14.162272 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:14.162281 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:14.162291 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:14.162320 | orchestrator | 2025-09-20 09:34:14.162331 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-20 09:34:14.162340 | orchestrator | Saturday 20 September 2025 09:34:08 +0000 (0:00:00.168) 0:00:24.034 **** 2025-09-20 09:34:14.162350 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:34:14.162360 | orchestrator | 2025-09-20 09:34:14.162370 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-20 09:34:14.162379 | orchestrator | Saturday 20 September 2025 09:34:09 +0000 (0:00:00.540) 0:00:24.574 **** 2025-09-20 09:34:14.162389 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:34:14.162398 | 
orchestrator | 2025-09-20 09:34:14.162408 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-20 09:34:14.162417 | orchestrator | Saturday 20 September 2025 09:34:09 +0000 (0:00:00.506) 0:00:25.081 **** 2025-09-20 09:34:14.162427 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:34:14.162436 | orchestrator | 2025-09-20 09:34:14.162446 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-20 09:34:14.162455 | orchestrator | Saturday 20 September 2025 09:34:09 +0000 (0:00:00.194) 0:00:25.275 **** 2025-09-20 09:34:14.162464 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'vg_name': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'}) 2025-09-20 09:34:14.162475 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'vg_name': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'}) 2025-09-20 09:34:14.162485 | orchestrator | 2025-09-20 09:34:14.162505 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-20 09:34:14.162515 | orchestrator | Saturday 20 September 2025 09:34:10 +0000 (0:00:00.217) 0:00:25.493 **** 2025-09-20 09:34:14.162524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:14.162548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:14.162559 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:14.162568 | orchestrator | 2025-09-20 09:34:14.162578 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-20 09:34:14.162587 | orchestrator | Saturday 20 September 2025 09:34:10 +0000 
(0:00:00.387) 0:00:25.881 **** 2025-09-20 09:34:14.162597 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:14.162606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:14.162616 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:14.162625 | orchestrator | 2025-09-20 09:34:14.162635 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-20 09:34:14.162644 | orchestrator | Saturday 20 September 2025 09:34:10 +0000 (0:00:00.177) 0:00:26.058 **** 2025-09-20 09:34:14.162654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'})  2025-09-20 09:34:14.162664 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'})  2025-09-20 09:34:14.162673 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:34:14.162683 | orchestrator | 2025-09-20 09:34:14.162692 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-20 09:34:14.162702 | orchestrator | Saturday 20 September 2025 09:34:10 +0000 (0:00:00.187) 0:00:26.246 **** 2025-09-20 09:34:14.162711 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 09:34:14.162721 | orchestrator |  "lvm_report": { 2025-09-20 09:34:14.162731 | orchestrator |  "lv": [ 2025-09-20 09:34:14.162741 | orchestrator |  { 2025-09-20 09:34:14.162761 | orchestrator |  "lv_name": "osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22", 2025-09-20 09:34:14.162773 | orchestrator |  "vg_name": "ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22" 2025-09-20 09:34:14.162782 | 
orchestrator |  }, 2025-09-20 09:34:14.162792 | orchestrator |  { 2025-09-20 09:34:14.162801 | orchestrator |  "lv_name": "osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a", 2025-09-20 09:34:14.162811 | orchestrator |  "vg_name": "ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a" 2025-09-20 09:34:14.162820 | orchestrator |  } 2025-09-20 09:34:14.162830 | orchestrator |  ], 2025-09-20 09:34:14.162839 | orchestrator |  "pv": [ 2025-09-20 09:34:14.162848 | orchestrator |  { 2025-09-20 09:34:14.162858 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-20 09:34:14.162867 | orchestrator |  "vg_name": "ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22" 2025-09-20 09:34:14.162877 | orchestrator |  }, 2025-09-20 09:34:14.162886 | orchestrator |  { 2025-09-20 09:34:14.162896 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-20 09:34:14.162905 | orchestrator |  "vg_name": "ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a" 2025-09-20 09:34:14.162915 | orchestrator |  } 2025-09-20 09:34:14.162924 | orchestrator |  ] 2025-09-20 09:34:14.162934 | orchestrator |  } 2025-09-20 09:34:14.162943 | orchestrator | } 2025-09-20 09:34:14.162953 | orchestrator | 2025-09-20 09:34:14.162963 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-20 09:34:14.162972 | orchestrator | 2025-09-20 09:34:14.162982 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 09:34:14.162992 | orchestrator | Saturday 20 September 2025 09:34:11 +0000 (0:00:00.316) 0:00:26.562 **** 2025-09-20 09:34:14.163001 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-20 09:34:14.163017 | orchestrator | 2025-09-20 09:34:14.163026 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 09:34:14.163036 | orchestrator | Saturday 20 September 2025 09:34:11 +0000 (0:00:00.280) 0:00:26.843 **** 2025-09-20 09:34:14.163045 | orchestrator | ok: [testbed-node-4] 2025-09-20 
09:34:14.163055 | orchestrator | 2025-09-20 09:34:14.163064 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:14.163074 | orchestrator | Saturday 20 September 2025 09:34:11 +0000 (0:00:00.243) 0:00:27.087 **** 2025-09-20 09:34:14.163083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-20 09:34:14.163093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-20 09:34:14.163102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-20 09:34:14.163111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-20 09:34:14.163121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-20 09:34:14.163130 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-20 09:34:14.163140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-20 09:34:14.163153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-20 09:34:14.163163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-20 09:34:14.163173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-20 09:34:14.163182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-20 09:34:14.163192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-20 09:34:14.163201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-20 09:34:14.163210 | orchestrator | 2025-09-20 09:34:14.163220 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2025-09-20 09:34:14.163229 | orchestrator | Saturday 20 September 2025 09:34:12 +0000 (0:00:00.431) 0:00:27.518 **** 2025-09-20 09:34:14.163239 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:34:14.163248 | orchestrator | 2025-09-20 09:34:14.163258 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:14.163267 | orchestrator | Saturday 20 September 2025 09:34:12 +0000 (0:00:00.223) 0:00:27.742 **** 2025-09-20 09:34:14.163277 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:34:14.163286 | orchestrator | 2025-09-20 09:34:14.163295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:14.163321 | orchestrator | Saturday 20 September 2025 09:34:12 +0000 (0:00:00.205) 0:00:27.947 **** 2025-09-20 09:34:14.163331 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:34:14.163340 | orchestrator | 2025-09-20 09:34:14.163350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:14.163359 | orchestrator | Saturday 20 September 2025 09:34:13 +0000 (0:00:00.823) 0:00:28.770 **** 2025-09-20 09:34:14.163369 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:34:14.163378 | orchestrator | 2025-09-20 09:34:14.163388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:14.163397 | orchestrator | Saturday 20 September 2025 09:34:13 +0000 (0:00:00.214) 0:00:28.985 **** 2025-09-20 09:34:14.163407 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:34:14.163416 | orchestrator | 2025-09-20 09:34:14.163426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:14.163435 | orchestrator | Saturday 20 September 2025 09:34:13 +0000 (0:00:00.188) 0:00:29.173 **** 2025-09-20 
09:34:14.163445 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:14.163454 | orchestrator |
2025-09-20 09:34:14.163470 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-20 09:34:14.163480 | orchestrator | Saturday 20 September 2025  09:34:13 +0000 (0:00:00.197)       0:00:29.371 ****
2025-09-20 09:34:14.163489 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:14.163499 | orchestrator |
2025-09-20 09:34:14.163514 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-20 09:34:24.893525 | orchestrator | Saturday 20 September 2025  09:34:14 +0000 (0:00:00.211)       0:00:29.582 ****
2025-09-20 09:34:24.893634 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.893650 | orchestrator |
2025-09-20 09:34:24.893663 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-20 09:34:24.893675 | orchestrator | Saturday 20 September 2025  09:34:14 +0000 (0:00:00.212)       0:00:29.795 ****
2025-09-20 09:34:24.893686 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b)
2025-09-20 09:34:24.893699 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b)
2025-09-20 09:34:24.893709 | orchestrator |
2025-09-20 09:34:24.893721 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-20 09:34:24.893732 | orchestrator | Saturday 20 September 2025  09:34:14 +0000 (0:00:00.437)       0:00:30.232 ****
2025-09-20 09:34:24.893742 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a)
2025-09-20 09:34:24.893753 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a)
2025-09-20 09:34:24.893764 | orchestrator |
2025-09-20 09:34:24.893775 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-20 09:34:24.893785 | orchestrator | Saturday 20 September 2025  09:34:15 +0000 (0:00:00.475)       0:00:30.707 ****
2025-09-20 09:34:24.893796 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9)
2025-09-20 09:34:24.893807 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9)
2025-09-20 09:34:24.893818 | orchestrator |
2025-09-20 09:34:24.893828 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-20 09:34:24.893839 | orchestrator | Saturday 20 September 2025  09:34:15 +0000 (0:00:00.472)       0:00:31.180 ****
2025-09-20 09:34:24.893850 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26)
2025-09-20 09:34:24.893861 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26)
2025-09-20 09:34:24.893871 | orchestrator |
2025-09-20 09:34:24.893882 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-20 09:34:24.893893 | orchestrator | Saturday 20 September 2025  09:34:16 +0000 (0:00:00.452)       0:00:31.633 ****
2025-09-20 09:34:24.893904 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-20 09:34:24.893915 | orchestrator |
2025-09-20 09:34:24.893926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.893936 | orchestrator | Saturday 20 September 2025  09:34:16 +0000 (0:00:00.359)       0:00:31.992 ****
2025-09-20 09:34:24.893947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-20 09:34:24.893959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-20 09:34:24.893969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-20 09:34:24.893980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-20 09:34:24.893991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-20 09:34:24.894002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-20 09:34:24.894083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-20 09:34:24.894122 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-20 09:34:24.894136 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-20 09:34:24.894148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-20 09:34:24.894161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-20 09:34:24.894174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-20 09:34:24.894186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-20 09:34:24.894198 | orchestrator |
2025-09-20 09:34:24.894210 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894223 | orchestrator | Saturday 20 September 2025  09:34:17 +0000 (0:00:00.650)       0:00:32.643 ****
2025-09-20 09:34:24.894236 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894248 | orchestrator |
2025-09-20 09:34:24.894261 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894273 | orchestrator | Saturday 20 September 2025  09:34:17 +0000 (0:00:00.225)       0:00:32.868 ****
2025-09-20 09:34:24.894285 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894323 | orchestrator |
2025-09-20 09:34:24.894336 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894348 | orchestrator | Saturday 20 September 2025  09:34:17 +0000 (0:00:00.225)       0:00:33.093 ****
2025-09-20 09:34:24.894360 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894372 | orchestrator |
2025-09-20 09:34:24.894385 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894397 | orchestrator | Saturday 20 September 2025  09:34:17 +0000 (0:00:00.230)       0:00:33.324 ****
2025-09-20 09:34:24.894408 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894419 | orchestrator |
2025-09-20 09:34:24.894446 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894458 | orchestrator | Saturday 20 September 2025  09:34:18 +0000 (0:00:00.220)       0:00:33.545 ****
2025-09-20 09:34:24.894469 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894479 | orchestrator |
2025-09-20 09:34:24.894490 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894501 | orchestrator | Saturday 20 September 2025  09:34:18 +0000 (0:00:00.232)       0:00:33.778 ****
2025-09-20 09:34:24.894511 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894522 | orchestrator |
2025-09-20 09:34:24.894533 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894543 | orchestrator | Saturday 20 September 2025  09:34:18 +0000 (0:00:00.217)       0:00:33.996 ****
2025-09-20 09:34:24.894554 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894564 | orchestrator |
2025-09-20 09:34:24.894575 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894586 | orchestrator | Saturday 20 September 2025  09:34:18 +0000 (0:00:00.206)       0:00:34.202 ****
2025-09-20 09:34:24.894596 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894607 | orchestrator |
2025-09-20 09:34:24.894618 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894628 | orchestrator | Saturday 20 September 2025  09:34:18 +0000 (0:00:00.201)       0:00:34.404 ****
2025-09-20 09:34:24.894639 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-20 09:34:24.894650 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-20 09:34:24.894661 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-20 09:34:24.894671 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-20 09:34:24.894682 | orchestrator |
2025-09-20 09:34:24.894693 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894704 | orchestrator | Saturday 20 September 2025  09:34:19 +0000 (0:00:00.875)       0:00:35.280 ****
2025-09-20 09:34:24.894723 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894734 | orchestrator |
2025-09-20 09:34:24.894744 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894755 | orchestrator | Saturday 20 September 2025  09:34:20 +0000 (0:00:00.199)       0:00:35.479 ****
2025-09-20 09:34:24.894765 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894776 | orchestrator |
2025-09-20 09:34:24.894787 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894797 | orchestrator | Saturday 20 September 2025  09:34:20 +0000 (0:00:00.206)       0:00:35.686 ****
2025-09-20 09:34:24.894808 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894818 | orchestrator |
2025-09-20 09:34:24.894829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-20 09:34:24.894839 | orchestrator | Saturday 20 September 2025  09:34:20 +0000 (0:00:00.673)       0:00:36.359 ****
2025-09-20 09:34:24.894850 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894860 | orchestrator |
2025-09-20 09:34:24.894871 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-20 09:34:24.894882 | orchestrator | Saturday 20 September 2025  09:34:21 +0000 (0:00:00.224)       0:00:36.583 ****
2025-09-20 09:34:24.894898 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.894909 | orchestrator |
2025-09-20 09:34:24.894920 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-20 09:34:24.894930 | orchestrator | Saturday 20 September 2025  09:34:21 +0000 (0:00:00.139)       0:00:36.723 ****
2025-09-20 09:34:24.894941 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6319afae-7c48-5c70-87a8-62ab4a9b6a4c'}})
2025-09-20 09:34:24.894952 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '606172b3-e8d7-56e6-aaf4-86ed1800c0e9'}})
2025-09-20 09:34:24.894962 | orchestrator |
2025-09-20 09:34:24.894973 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-20 09:34:24.894984 | orchestrator | Saturday 20 September 2025  09:34:21 +0000 (0:00:00.226)       0:00:36.950 ****
2025-09-20 09:34:24.894996 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:24.895008 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:24.895018 | orchestrator |
2025-09-20 09:34:24.895029 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-20 09:34:24.895039 | orchestrator | Saturday 20 September 2025  09:34:23 +0000 (0:00:01.882)       0:00:38.832 ****
2025-09-20 09:34:24.895050 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:24.895062 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:24.895073 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:24.895083 | orchestrator |
2025-09-20 09:34:24.895094 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-20 09:34:24.895105 | orchestrator | Saturday 20 September 2025  09:34:23 +0000 (0:00:00.180)       0:00:39.013 ****
2025-09-20 09:34:24.895115 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:24.895126 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:24.895137 | orchestrator |
2025-09-20 09:34:24.895154 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-20 09:34:30.735836 | orchestrator | Saturday 20 September 2025  09:34:24 +0000 (0:00:01.295)       0:00:40.308 ****
2025-09-20 09:34:30.735996 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:30.736015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:30.736031 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736048 | orchestrator |
2025-09-20 09:34:30.736066 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-20 09:34:30.736083 | orchestrator | Saturday 20 September 2025  09:34:25 +0000 (0:00:00.170)       0:00:40.479 ****
2025-09-20 09:34:30.736099 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736114 | orchestrator |
2025-09-20 09:34:30.736130 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-20 09:34:30.736146 | orchestrator | Saturday 20 September 2025  09:34:25 +0000 (0:00:00.154)       0:00:40.633 ****
2025-09-20 09:34:30.736162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:30.736177 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:30.736192 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736207 | orchestrator |
2025-09-20 09:34:30.736223 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-20 09:34:30.736239 | orchestrator | Saturday 20 September 2025  09:34:25 +0000 (0:00:00.180)       0:00:40.814 ****
2025-09-20 09:34:30.736254 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736269 | orchestrator |
2025-09-20 09:34:30.736320 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-20 09:34:30.736341 | orchestrator | Saturday 20 September 2025  09:34:25 +0000 (0:00:00.135)       0:00:40.949 ****
2025-09-20 09:34:30.736359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:30.736378 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:30.736395 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736412 | orchestrator |
2025-09-20 09:34:30.736423 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-20 09:34:30.736434 | orchestrator | Saturday 20 September 2025  09:34:25 +0000 (0:00:00.158)       0:00:41.108 ****
2025-09-20 09:34:30.736462 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736473 | orchestrator |
2025-09-20 09:34:30.736484 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-20 09:34:30.736495 | orchestrator | Saturday 20 September 2025  09:34:26 +0000 (0:00:00.365)       0:00:41.474 ****
2025-09-20 09:34:30.736506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:30.736518 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:30.736528 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736539 | orchestrator |
2025-09-20 09:34:30.736550 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-20 09:34:30.736561 | orchestrator | Saturday 20 September 2025  09:34:26 +0000 (0:00:00.177)       0:00:41.651 ****
2025-09-20 09:34:30.736572 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:34:30.736583 | orchestrator |
2025-09-20 09:34:30.736595 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-20 09:34:30.736606 | orchestrator | Saturday 20 September 2025  09:34:26 +0000 (0:00:00.152)       0:00:41.804 ****
2025-09-20 09:34:30.736632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:30.736643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:30.736652 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736662 | orchestrator |
2025-09-20 09:34:30.736671 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-20 09:34:30.736681 | orchestrator | Saturday 20 September 2025  09:34:26 +0000 (0:00:00.169)       0:00:41.973 ****
2025-09-20 09:34:30.736690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:30.736700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:30.736709 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736719 | orchestrator |
2025-09-20 09:34:30.736728 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-20 09:34:30.736738 | orchestrator | Saturday 20 September 2025  09:34:26 +0000 (0:00:00.155)       0:00:42.129 ****
2025-09-20 09:34:30.736767 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:30.736777 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:30.736787 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736796 | orchestrator |
2025-09-20 09:34:30.736805 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-20 09:34:30.736815 | orchestrator | Saturday 20 September 2025  09:34:26 +0000 (0:00:00.167)       0:00:42.296 ****
2025-09-20 09:34:30.736824 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736834 | orchestrator |
2025-09-20 09:34:30.736843 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-20 09:34:30.736853 | orchestrator | Saturday 20 September 2025  09:34:27 +0000 (0:00:00.140)       0:00:42.437 ****
2025-09-20 09:34:30.736862 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736871 | orchestrator |
2025-09-20 09:34:30.736881 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-20 09:34:30.736890 | orchestrator | Saturday 20 September 2025  09:34:27 +0000 (0:00:00.165)       0:00:42.602 ****
2025-09-20 09:34:30.736900 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.736909 | orchestrator |
2025-09-20 09:34:30.736918 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-20 09:34:30.736928 | orchestrator | Saturday 20 September 2025  09:34:27 +0000 (0:00:00.148)       0:00:42.751 ****
2025-09-20 09:34:30.736937 | orchestrator | ok: [testbed-node-4] => {
2025-09-20 09:34:30.736946 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-20 09:34:30.736956 | orchestrator | }
2025-09-20 09:34:30.736966 | orchestrator |
2025-09-20 09:34:30.736975 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-20 09:34:30.736985 | orchestrator | Saturday 20 September 2025  09:34:27 +0000 (0:00:00.167)       0:00:42.919 ****
2025-09-20 09:34:30.736994 | orchestrator | ok: [testbed-node-4] => {
2025-09-20 09:34:30.737003 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-20 09:34:30.737013 | orchestrator | }
2025-09-20 09:34:30.737022 | orchestrator |
2025-09-20 09:34:30.737032 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-20 09:34:30.737041 | orchestrator | Saturday 20 September 2025  09:34:27 +0000 (0:00:00.152)       0:00:43.072 ****
2025-09-20 09:34:30.737050 | orchestrator | ok: [testbed-node-4] => {
2025-09-20 09:34:30.737060 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-20 09:34:30.737075 | orchestrator | }
2025-09-20 09:34:30.737085 | orchestrator |
2025-09-20 09:34:30.737094 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-20 09:34:30.737104 | orchestrator | Saturday 20 September 2025  09:34:27 +0000 (0:00:00.148)       0:00:43.220 ****
2025-09-20 09:34:30.737113 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:34:30.737123 | orchestrator |
2025-09-20 09:34:30.737132 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-20 09:34:30.737141 | orchestrator | Saturday 20 September 2025  09:34:28 +0000 (0:00:00.741)       0:00:43.962 ****
2025-09-20 09:34:30.737151 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:34:30.737160 | orchestrator |
2025-09-20 09:34:30.737170 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-20 09:34:30.737179 | orchestrator | Saturday 20 September 2025  09:34:29 +0000 (0:00:00.544)       0:00:44.507 ****
2025-09-20 09:34:30.737189 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:34:30.737198 | orchestrator |
2025-09-20 09:34:30.737208 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-20 09:34:30.737217 | orchestrator | Saturday 20 September 2025  09:34:29 +0000 (0:00:00.526)       0:00:45.033 ****
2025-09-20 09:34:30.737227 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:34:30.737236 | orchestrator |
2025-09-20 09:34:30.737245 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-20 09:34:30.737255 | orchestrator | Saturday 20 September 2025  09:34:29 +0000 (0:00:00.163)       0:00:45.196 ****
2025-09-20 09:34:30.737264 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.737273 | orchestrator |
2025-09-20 09:34:30.737283 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-20 09:34:30.737312 | orchestrator | Saturday 20 September 2025  09:34:29 +0000 (0:00:00.114)       0:00:45.311 ****
2025-09-20 09:34:30.737329 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.737339 | orchestrator |
2025-09-20 09:34:30.737349 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-20 09:34:30.737358 | orchestrator | Saturday 20 September 2025  09:34:29 +0000 (0:00:00.116)       0:00:45.427 ****
2025-09-20 09:34:30.737368 | orchestrator | ok: [testbed-node-4] => {
2025-09-20 09:34:30.737378 | orchestrator |     "vgs_report": {
2025-09-20 09:34:30.737388 | orchestrator |         "vg": []
2025-09-20 09:34:30.737398 | orchestrator |     }
2025-09-20 09:34:30.737407 | orchestrator | }
2025-09-20 09:34:30.737417 | orchestrator |
2025-09-20 09:34:30.737426 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-20 09:34:30.737436 | orchestrator | Saturday 20 September 2025  09:34:30 +0000 (0:00:00.148)       0:00:45.576 ****
2025-09-20 09:34:30.737445 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.737455 | orchestrator |
2025-09-20 09:34:30.737464 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-20 09:34:30.737474 | orchestrator | Saturday 20 September 2025  09:34:30 +0000 (0:00:00.141)       0:00:45.717 ****
2025-09-20 09:34:30.737483 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.737493 | orchestrator |
2025-09-20 09:34:30.737502 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-20 09:34:30.737512 | orchestrator | Saturday 20 September 2025  09:34:30 +0000 (0:00:00.138)       0:00:45.856 ****
2025-09-20 09:34:30.737521 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.737531 | orchestrator |
2025-09-20 09:34:30.737540 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-20 09:34:30.737549 | orchestrator | Saturday 20 September 2025  09:34:30 +0000 (0:00:00.144)       0:00:46.000 ****
2025-09-20 09:34:30.737559 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:30.737568 | orchestrator |
2025-09-20 09:34:30.737578 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-20 09:34:30.737594 | orchestrator | Saturday 20 September 2025  09:34:30 +0000 (0:00:00.150)       0:00:46.150 ****
2025-09-20 09:34:35.860328 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860403 | orchestrator |
2025-09-20 09:34:35.860428 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-20 09:34:35.860437 | orchestrator | Saturday 20 September 2025  09:34:30 +0000 (0:00:00.138)       0:00:46.288 ****
2025-09-20 09:34:35.860443 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860449 | orchestrator |
2025-09-20 09:34:35.860455 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-20 09:34:35.860462 | orchestrator | Saturday 20 September 2025  09:34:31 +0000 (0:00:00.374)       0:00:46.663 ****
2025-09-20 09:34:35.860468 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860474 | orchestrator |
2025-09-20 09:34:35.860480 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-20 09:34:35.860486 | orchestrator | Saturday 20 September 2025  09:34:31 +0000 (0:00:00.200)       0:00:46.863 ****
2025-09-20 09:34:35.860492 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860498 | orchestrator |
2025-09-20 09:34:35.860504 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-20 09:34:35.860511 | orchestrator | Saturday 20 September 2025  09:34:31 +0000 (0:00:00.167)       0:00:47.031 ****
2025-09-20 09:34:35.860517 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860523 | orchestrator |
2025-09-20 09:34:35.860529 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-20 09:34:35.860535 | orchestrator | Saturday 20 September 2025  09:34:31 +0000 (0:00:00.147)       0:00:47.186 ****
2025-09-20 09:34:35.860541 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860547 | orchestrator |
2025-09-20 09:34:35.860553 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-20 09:34:35.860559 | orchestrator | Saturday 20 September 2025  09:34:31 +0000 (0:00:00.150)       0:00:47.334 ****
2025-09-20 09:34:35.860565 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860571 | orchestrator |
2025-09-20 09:34:35.860577 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-20 09:34:35.860583 | orchestrator | Saturday 20 September 2025  09:34:32 +0000 (0:00:00.150)       0:00:47.484 ****
2025-09-20 09:34:35.860589 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860595 | orchestrator |
2025-09-20 09:34:35.860601 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-20 09:34:35.860607 | orchestrator | Saturday 20 September 2025  09:34:32 +0000 (0:00:00.159)       0:00:47.643 ****
2025-09-20 09:34:35.860613 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860619 | orchestrator |
2025-09-20 09:34:35.860625 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-20 09:34:35.860631 | orchestrator | Saturday 20 September 2025  09:34:32 +0000 (0:00:00.146)       0:00:47.790 ****
2025-09-20 09:34:35.860637 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860643 | orchestrator |
2025-09-20 09:34:35.860649 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-20 09:34:35.860655 | orchestrator | Saturday 20 September 2025  09:34:32 +0000 (0:00:00.153)       0:00:47.943 ****
2025-09-20 09:34:35.860674 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.860683 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.860689 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860696 | orchestrator |
2025-09-20 09:34:35.860702 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-20 09:34:35.860708 | orchestrator | Saturday 20 September 2025  09:34:32 +0000 (0:00:00.156)       0:00:48.100 ****
2025-09-20 09:34:35.860714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.860720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.860734 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860740 | orchestrator |
2025-09-20 09:34:35.860746 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-20 09:34:35.860752 | orchestrator | Saturday 20 September 2025  09:34:32 +0000 (0:00:00.174)       0:00:48.274 ****
2025-09-20 09:34:35.860758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.860764 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.860770 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860776 | orchestrator |
2025-09-20 09:34:35.860783 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-20 09:34:35.860789 | orchestrator | Saturday 20 September 2025  09:34:33 +0000 (0:00:00.178)       0:00:48.453 ****
2025-09-20 09:34:35.860795 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.860801 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.860807 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860813 | orchestrator |
2025-09-20 09:34:35.860819 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-20 09:34:35.860837 | orchestrator | Saturday 20 September 2025  09:34:33 +0000 (0:00:00.374)       0:00:48.828 ****
2025-09-20 09:34:35.860844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.860850 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.860856 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860862 | orchestrator |
2025-09-20 09:34:35.860870 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-20 09:34:35.860877 | orchestrator | Saturday 20 September 2025  09:34:33 +0000 (0:00:00.202)       0:00:49.031 ****
2025-09-20 09:34:35.860885 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.860892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.860899 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860906 | orchestrator |
2025-09-20 09:34:35.860913 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-20 09:34:35.860919 | orchestrator | Saturday 20 September 2025  09:34:33 +0000 (0:00:00.171)       0:00:49.202 ****
2025-09-20 09:34:35.860927 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.860934 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.860941 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860948 | orchestrator |
2025-09-20 09:34:35.860955 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-20 09:34:35.860962 | orchestrator | Saturday 20 September 2025  09:34:33 +0000 (0:00:00.169)       0:00:49.372 ****
2025-09-20 09:34:35.860969 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.860981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.860988 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.860995 | orchestrator |
2025-09-20 09:34:35.861005 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-20 09:34:35.861012 | orchestrator | Saturday 20 September 2025  09:34:34 +0000 (0:00:00.150)       0:00:49.522 ****
2025-09-20 09:34:35.861020 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:34:35.861026 | orchestrator |
2025-09-20 09:34:35.861033 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-20 09:34:35.861040 | orchestrator | Saturday 20 September 2025  09:34:34 +0000 (0:00:00.515)       0:00:50.038 ****
2025-09-20 09:34:35.861047 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:34:35.861054 | orchestrator |
2025-09-20 09:34:35.861061 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-20 09:34:35.861068 | orchestrator | Saturday 20 September 2025  09:34:35 +0000 (0:00:00.549)       0:00:50.587 ****
2025-09-20 09:34:35.861075 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:34:35.861082 | orchestrator |
2025-09-20 09:34:35.861089 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-20 09:34:35.861096 | orchestrator | Saturday 20 September 2025  09:34:35 +0000 (0:00:00.173)       0:00:50.747 ****
2025-09-20 09:34:35.861103 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'vg_name': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.861112 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'vg_name': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.861119 | orchestrator |
2025-09-20 09:34:35.861126 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-20 09:34:35.861133 | orchestrator | Saturday 20 September 2025  09:34:35 +0000 (0:00:00.173)       0:00:50.921 ****
2025-09-20 09:34:35.861140 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.861147 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.861154 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:35.861161 | orchestrator |
2025-09-20 09:34:35.861168 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-20 09:34:35.861175 | orchestrator | Saturday 20 September 2025  09:34:35 +0000 (0:00:00.188)       0:00:51.110 ****
2025-09-20 09:34:35.861182 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:35.861189 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:35.861200 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:42.055800 | orchestrator |
2025-09-20 09:34:42.055915 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-20 09:34:42.055932 | orchestrator | Saturday 20 September 2025  09:34:35 +0000 (0:00:00.167)       0:00:51.277 ****
2025-09-20 09:34:42.055945 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'})
2025-09-20 09:34:42.055958 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'})
2025-09-20 09:34:42.055969 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:34:42.055981 | orchestrator |
2025-09-20 09:34:42.055992 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-20 09:34:42.056003 | orchestrator | Saturday 20 September 2025  09:34:36 +0000 (0:00:00.180)       0:00:51.458 ****
2025-09-20 09:34:42.056038 | orchestrator | ok: [testbed-node-4] => {
2025-09-20 09:34:42.056050 | orchestrator |     "lvm_report": {
2025-09-20 09:34:42.056064 | orchestrator |         "lv": [
2025-09-20 09:34:42.056075 | orchestrator |             {
2025-09-20 09:34:42.056086 | orchestrator |                 "lv_name": "osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9",
2025-09-20 09:34:42.056098 | orchestrator |                 "vg_name": "ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9"
2025-09-20 09:34:42.056108 | orchestrator |             },
2025-09-20 09:34:42.056119 | orchestrator |             {
2025-09-20 09:34:42.056130 | orchestrator |                 "lv_name": "osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c",
2025-09-20 09:34:42.056140 | orchestrator |                 "vg_name": "ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c"
2025-09-20 09:34:42.056151 | orchestrator |             }
2025-09-20 09:34:42.056162 | orchestrator |         ],
2025-09-20 09:34:42.056172 | orchestrator |         "pv": [
2025-09-20 09:34:42.056183 | orchestrator |             {
2025-09-20 09:34:42.056193 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-20 09:34:42.056204 | orchestrator |                 "vg_name": "ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c"
2025-09-20 09:34:42.056215 | orchestrator |             },
2025-09-20 09:34:42.056225 | orchestrator |             {
2025-09-20 09:34:42.056236 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-20 09:34:42.056247 | orchestrator |                 "vg_name":
"ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9" 2025-09-20 09:34:42.056257 | orchestrator |  } 2025-09-20 09:34:42.056268 | orchestrator |  ] 2025-09-20 09:34:42.056316 | orchestrator |  } 2025-09-20 09:34:42.056329 | orchestrator | } 2025-09-20 09:34:42.056342 | orchestrator | 2025-09-20 09:34:42.056355 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-20 09:34:42.056368 | orchestrator | 2025-09-20 09:34:42.056380 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 09:34:42.056393 | orchestrator | Saturday 20 September 2025 09:34:36 +0000 (0:00:00.561) 0:00:52.019 **** 2025-09-20 09:34:42.056406 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-20 09:34:42.056418 | orchestrator | 2025-09-20 09:34:42.056430 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-20 09:34:42.056441 | orchestrator | Saturday 20 September 2025 09:34:36 +0000 (0:00:00.257) 0:00:52.277 **** 2025-09-20 09:34:42.056452 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:34:42.056463 | orchestrator | 2025-09-20 09:34:42.056474 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.056485 | orchestrator | Saturday 20 September 2025 09:34:37 +0000 (0:00:00.242) 0:00:52.520 **** 2025-09-20 09:34:42.056495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-20 09:34:42.056506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-20 09:34:42.056517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-20 09:34:42.056528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-20 09:34:42.056538 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-20 09:34:42.056549 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-20 09:34:42.056559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-20 09:34:42.056570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-20 09:34:42.056580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-20 09:34:42.056591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-20 09:34:42.056601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-20 09:34:42.056620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-20 09:34:42.056631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-20 09:34:42.056642 | orchestrator | 2025-09-20 09:34:42.056652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.056663 | orchestrator | Saturday 20 September 2025 09:34:37 +0000 (0:00:00.396) 0:00:52.916 **** 2025-09-20 09:34:42.056674 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:42.056688 | orchestrator | 2025-09-20 09:34:42.056699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.056710 | orchestrator | Saturday 20 September 2025 09:34:37 +0000 (0:00:00.191) 0:00:53.108 **** 2025-09-20 09:34:42.056721 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:42.056731 | orchestrator | 2025-09-20 09:34:42.056742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.056771 | orchestrator | 
Saturday 20 September 2025 09:34:37 +0000 (0:00:00.190) 0:00:53.299 **** 2025-09-20 09:34:42.056783 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:42.056794 | orchestrator | 2025-09-20 09:34:42.056805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.056815 | orchestrator | Saturday 20 September 2025 09:34:38 +0000 (0:00:00.189) 0:00:53.488 **** 2025-09-20 09:34:42.056826 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:42.056836 | orchestrator | 2025-09-20 09:34:42.056847 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.056858 | orchestrator | Saturday 20 September 2025 09:34:38 +0000 (0:00:00.198) 0:00:53.687 **** 2025-09-20 09:34:42.056868 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:42.056879 | orchestrator | 2025-09-20 09:34:42.056939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.056952 | orchestrator | Saturday 20 September 2025 09:34:38 +0000 (0:00:00.210) 0:00:53.897 **** 2025-09-20 09:34:42.056963 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:42.056973 | orchestrator | 2025-09-20 09:34:42.056984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.056995 | orchestrator | Saturday 20 September 2025 09:34:39 +0000 (0:00:00.556) 0:00:54.454 **** 2025-09-20 09:34:42.057005 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:42.057016 | orchestrator | 2025-09-20 09:34:42.057027 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.057037 | orchestrator | Saturday 20 September 2025 09:34:39 +0000 (0:00:00.220) 0:00:54.674 **** 2025-09-20 09:34:42.057048 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:42.057058 | orchestrator | 2025-09-20 09:34:42.057069 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.057080 | orchestrator | Saturday 20 September 2025 09:34:39 +0000 (0:00:00.195) 0:00:54.870 **** 2025-09-20 09:34:42.057090 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8) 2025-09-20 09:34:42.057102 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8) 2025-09-20 09:34:42.057113 | orchestrator | 2025-09-20 09:34:42.057124 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.057134 | orchestrator | Saturday 20 September 2025 09:34:39 +0000 (0:00:00.465) 0:00:55.335 **** 2025-09-20 09:34:42.057145 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57) 2025-09-20 09:34:42.057155 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57) 2025-09-20 09:34:42.057166 | orchestrator | 2025-09-20 09:34:42.057177 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.057187 | orchestrator | Saturday 20 September 2025 09:34:40 +0000 (0:00:00.402) 0:00:55.737 **** 2025-09-20 09:34:42.057211 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5) 2025-09-20 09:34:42.057222 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5) 2025-09-20 09:34:42.057233 | orchestrator | 2025-09-20 09:34:42.057244 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.057254 | orchestrator | Saturday 20 September 2025 09:34:40 +0000 (0:00:00.477) 0:00:56.215 **** 2025-09-20 09:34:42.057265 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d) 2025-09-20 09:34:42.057292 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d) 2025-09-20 09:34:42.057304 | orchestrator | 2025-09-20 09:34:42.057314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-20 09:34:42.057325 | orchestrator | Saturday 20 September 2025 09:34:41 +0000 (0:00:00.451) 0:00:56.667 **** 2025-09-20 09:34:42.057336 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-20 09:34:42.057346 | orchestrator | 2025-09-20 09:34:42.057357 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:42.057367 | orchestrator | Saturday 20 September 2025 09:34:41 +0000 (0:00:00.320) 0:00:56.987 **** 2025-09-20 09:34:42.057378 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-20 09:34:42.057388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-20 09:34:42.057399 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-20 09:34:42.057410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-20 09:34:42.057420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-20 09:34:42.057431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-20 09:34:42.057441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-20 09:34:42.057452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-20 09:34:42.057462 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-20 09:34:42.057473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-20 09:34:42.057483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-20 09:34:42.057501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-20 09:34:51.294363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-20 09:34:51.294482 | orchestrator | 2025-09-20 09:34:51.294498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294511 | orchestrator | Saturday 20 September 2025 09:34:42 +0000 (0:00:00.478) 0:00:57.466 **** 2025-09-20 09:34:51.294522 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.294534 | orchestrator | 2025-09-20 09:34:51.294545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294556 | orchestrator | Saturday 20 September 2025 09:34:42 +0000 (0:00:00.222) 0:00:57.688 **** 2025-09-20 09:34:51.294566 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.294577 | orchestrator | 2025-09-20 09:34:51.294587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294598 | orchestrator | Saturday 20 September 2025 09:34:42 +0000 (0:00:00.207) 0:00:57.895 **** 2025-09-20 09:34:51.294609 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.294619 | orchestrator | 2025-09-20 09:34:51.294630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294665 | orchestrator | Saturday 20 September 2025 09:34:43 +0000 (0:00:00.659) 0:00:58.555 **** 2025-09-20 09:34:51.294676 | orchestrator | 
skipping: [testbed-node-5] 2025-09-20 09:34:51.294687 | orchestrator | 2025-09-20 09:34:51.294698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294708 | orchestrator | Saturday 20 September 2025 09:34:43 +0000 (0:00:00.205) 0:00:58.760 **** 2025-09-20 09:34:51.294719 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.294729 | orchestrator | 2025-09-20 09:34:51.294740 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294750 | orchestrator | Saturday 20 September 2025 09:34:43 +0000 (0:00:00.206) 0:00:58.966 **** 2025-09-20 09:34:51.294761 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.294771 | orchestrator | 2025-09-20 09:34:51.294782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294792 | orchestrator | Saturday 20 September 2025 09:34:43 +0000 (0:00:00.206) 0:00:59.173 **** 2025-09-20 09:34:51.294803 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.294813 | orchestrator | 2025-09-20 09:34:51.294824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294834 | orchestrator | Saturday 20 September 2025 09:34:43 +0000 (0:00:00.207) 0:00:59.380 **** 2025-09-20 09:34:51.294845 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.294855 | orchestrator | 2025-09-20 09:34:51.294866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294878 | orchestrator | Saturday 20 September 2025 09:34:44 +0000 (0:00:00.205) 0:00:59.586 **** 2025-09-20 09:34:51.294890 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-20 09:34:51.294903 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-20 09:34:51.294930 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-20 
09:34:51.294944 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-20 09:34:51.294956 | orchestrator | 2025-09-20 09:34:51.294968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.294980 | orchestrator | Saturday 20 September 2025 09:34:44 +0000 (0:00:00.658) 0:01:00.244 **** 2025-09-20 09:34:51.294993 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295005 | orchestrator | 2025-09-20 09:34:51.295017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.295030 | orchestrator | Saturday 20 September 2025 09:34:45 +0000 (0:00:00.219) 0:01:00.464 **** 2025-09-20 09:34:51.295042 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295053 | orchestrator | 2025-09-20 09:34:51.295067 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.295078 | orchestrator | Saturday 20 September 2025 09:34:45 +0000 (0:00:00.218) 0:01:00.682 **** 2025-09-20 09:34:51.295091 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295103 | orchestrator | 2025-09-20 09:34:51.295115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-20 09:34:51.295127 | orchestrator | Saturday 20 September 2025 09:34:45 +0000 (0:00:00.216) 0:01:00.899 **** 2025-09-20 09:34:51.295139 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295151 | orchestrator | 2025-09-20 09:34:51.295163 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-20 09:34:51.295175 | orchestrator | Saturday 20 September 2025 09:34:45 +0000 (0:00:00.217) 0:01:01.117 **** 2025-09-20 09:34:51.295186 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295198 | orchestrator | 2025-09-20 09:34:51.295211 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-20 09:34:51.295224 | orchestrator | Saturday 20 September 2025 09:34:46 +0000 (0:00:00.386) 0:01:01.504 **** 2025-09-20 09:34:51.295234 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a0e476ce-8dbb-5cb3-b205-e96c67f25126'}}) 2025-09-20 09:34:51.295245 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '54d5d251-b5b9-5293-b72e-54d20a6e98e4'}}) 2025-09-20 09:34:51.295284 | orchestrator | 2025-09-20 09:34:51.295297 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-20 09:34:51.295307 | orchestrator | Saturday 20 September 2025 09:34:46 +0000 (0:00:00.198) 0:01:01.703 **** 2025-09-20 09:34:51.295319 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'}) 2025-09-20 09:34:51.295330 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'}) 2025-09-20 09:34:51.295341 | orchestrator | 2025-09-20 09:34:51.295352 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-20 09:34:51.295380 | orchestrator | Saturday 20 September 2025 09:34:48 +0000 (0:00:01.821) 0:01:03.524 **** 2025-09-20 09:34:51.295392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:34:51.295404 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:51.295415 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295425 | orchestrator | 2025-09-20 09:34:51.295436 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-20 09:34:51.295447 | orchestrator | Saturday 20 September 2025 09:34:48 +0000 (0:00:00.155) 0:01:03.680 **** 2025-09-20 09:34:51.295457 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'}) 2025-09-20 09:34:51.295468 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'}) 2025-09-20 09:34:51.295480 | orchestrator | 2025-09-20 09:34:51.295490 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-20 09:34:51.295501 | orchestrator | Saturday 20 September 2025 09:34:49 +0000 (0:00:01.303) 0:01:04.984 **** 2025-09-20 09:34:51.295512 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:34:51.295523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:51.295533 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295544 | orchestrator | 2025-09-20 09:34:51.295555 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-20 09:34:51.295565 | orchestrator | Saturday 20 September 2025 09:34:49 +0000 (0:00:00.168) 0:01:05.152 **** 2025-09-20 09:34:51.295576 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295586 | orchestrator | 2025-09-20 09:34:51.295597 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-20 09:34:51.295607 | orchestrator | Saturday 20 September 2025 09:34:49 +0000 (0:00:00.176) 0:01:05.329 **** 2025-09-20 09:34:51.295618 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:34:51.295634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:51.295645 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295656 | orchestrator | 2025-09-20 09:34:51.295667 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-20 09:34:51.295677 | orchestrator | Saturday 20 September 2025 09:34:50 +0000 (0:00:00.158) 0:01:05.487 **** 2025-09-20 09:34:51.295688 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295706 | orchestrator | 2025-09-20 09:34:51.295717 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-20 09:34:51.295728 | orchestrator | Saturday 20 September 2025 09:34:50 +0000 (0:00:00.149) 0:01:05.636 **** 2025-09-20 09:34:51.295738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:34:51.295749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:51.295760 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295770 | orchestrator | 2025-09-20 09:34:51.295781 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-20 09:34:51.295793 | orchestrator | Saturday 20 September 2025 09:34:50 +0000 (0:00:00.159) 0:01:05.796 **** 2025-09-20 09:34:51.295812 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295831 | orchestrator | 2025-09-20 09:34:51.295862 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-20 09:34:51.295882 | orchestrator | Saturday 20 September 2025 09:34:50 +0000 (0:00:00.149) 0:01:05.946 **** 2025-09-20 09:34:51.295900 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:34:51.295918 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:51.295937 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:51.295956 | orchestrator | 2025-09-20 09:34:51.295973 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-20 09:34:51.295990 | orchestrator | Saturday 20 September 2025 09:34:50 +0000 (0:00:00.204) 0:01:06.150 **** 2025-09-20 09:34:51.296009 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:34:51.296028 | orchestrator | 2025-09-20 09:34:51.296047 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-20 09:34:51.296065 | orchestrator | Saturday 20 September 2025 09:34:51 +0000 (0:00:00.380) 0:01:06.530 **** 2025-09-20 09:34:51.296090 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:34:57.667833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:57.667946 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.667962 | orchestrator | 2025-09-20 09:34:57.667975 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-20 09:34:57.667987 | orchestrator | Saturday 20 September 2025 
09:34:51 +0000 (0:00:00.179) 0:01:06.710 **** 2025-09-20 09:34:57.667999 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:34:57.668010 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:57.668021 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.668032 | orchestrator | 2025-09-20 09:34:57.668044 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-20 09:34:57.668055 | orchestrator | Saturday 20 September 2025 09:34:51 +0000 (0:00:00.192) 0:01:06.903 **** 2025-09-20 09:34:57.668065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:34:57.668076 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:57.668087 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.668120 | orchestrator | 2025-09-20 09:34:57.668132 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-20 09:34:57.668143 | orchestrator | Saturday 20 September 2025 09:34:51 +0000 (0:00:00.156) 0:01:07.059 **** 2025-09-20 09:34:57.668154 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.668165 | orchestrator | 2025-09-20 09:34:57.668175 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-20 09:34:57.668186 | orchestrator | Saturday 20 September 2025 09:34:51 +0000 (0:00:00.151) 0:01:07.211 **** 2025-09-20 09:34:57.668197 | orchestrator | skipping: [testbed-node-5] 2025-09-20 
09:34:57.668207 | orchestrator | 2025-09-20 09:34:57.668218 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-20 09:34:57.668229 | orchestrator | Saturday 20 September 2025 09:34:51 +0000 (0:00:00.134) 0:01:07.345 **** 2025-09-20 09:34:57.668240 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.668250 | orchestrator | 2025-09-20 09:34:57.668295 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-20 09:34:57.668307 | orchestrator | Saturday 20 September 2025 09:34:52 +0000 (0:00:00.145) 0:01:07.491 **** 2025-09-20 09:34:57.668318 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 09:34:57.668329 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-20 09:34:57.668340 | orchestrator | } 2025-09-20 09:34:57.668351 | orchestrator | 2025-09-20 09:34:57.668362 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-20 09:34:57.668373 | orchestrator | Saturday 20 September 2025 09:34:52 +0000 (0:00:00.152) 0:01:07.643 **** 2025-09-20 09:34:57.668384 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 09:34:57.668395 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-20 09:34:57.668405 | orchestrator | } 2025-09-20 09:34:57.668416 | orchestrator | 2025-09-20 09:34:57.668426 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-20 09:34:57.668438 | orchestrator | Saturday 20 September 2025 09:34:52 +0000 (0:00:00.162) 0:01:07.806 **** 2025-09-20 09:34:57.668448 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 09:34:57.668459 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-20 09:34:57.668470 | orchestrator | } 2025-09-20 09:34:57.668481 | orchestrator | 2025-09-20 09:34:57.668492 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-20 09:34:57.668502 | 
orchestrator | Saturday 20 September 2025 09:34:52 +0000 (0:00:00.153) 0:01:07.960 **** 2025-09-20 09:34:57.668513 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:34:57.668524 | orchestrator | 2025-09-20 09:34:57.668535 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-20 09:34:57.668545 | orchestrator | Saturday 20 September 2025 09:34:53 +0000 (0:00:00.529) 0:01:08.489 **** 2025-09-20 09:34:57.668556 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:34:57.668566 | orchestrator | 2025-09-20 09:34:57.668577 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-20 09:34:57.668588 | orchestrator | Saturday 20 September 2025 09:34:53 +0000 (0:00:00.530) 0:01:09.020 **** 2025-09-20 09:34:57.668598 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:34:57.668609 | orchestrator | 2025-09-20 09:34:57.668620 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-20 09:34:57.668630 | orchestrator | Saturday 20 September 2025 09:34:54 +0000 (0:00:00.761) 0:01:09.782 **** 2025-09-20 09:34:57.668641 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:34:57.668652 | orchestrator | 2025-09-20 09:34:57.668662 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-20 09:34:57.668673 | orchestrator | Saturday 20 September 2025 09:34:54 +0000 (0:00:00.155) 0:01:09.937 **** 2025-09-20 09:34:57.668684 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.668694 | orchestrator | 2025-09-20 09:34:57.668705 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-20 09:34:57.668715 | orchestrator | Saturday 20 September 2025 09:34:54 +0000 (0:00:00.126) 0:01:10.063 **** 2025-09-20 09:34:57.668734 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.668745 | orchestrator | 2025-09-20 09:34:57.668755 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-20 09:34:57.668766 | orchestrator | Saturday 20 September 2025 09:34:54 +0000 (0:00:00.118) 0:01:10.182 **** 2025-09-20 09:34:57.668777 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 09:34:57.668807 | orchestrator |  "vgs_report": { 2025-09-20 09:34:57.668820 | orchestrator |  "vg": [] 2025-09-20 09:34:57.668848 | orchestrator |  } 2025-09-20 09:34:57.668860 | orchestrator | } 2025-09-20 09:34:57.668871 | orchestrator | 2025-09-20 09:34:57.668882 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-20 09:34:57.668892 | orchestrator | Saturday 20 September 2025 09:34:54 +0000 (0:00:00.149) 0:01:10.331 **** 2025-09-20 09:34:57.668903 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.668913 | orchestrator | 2025-09-20 09:34:57.668924 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-20 09:34:57.668934 | orchestrator | Saturday 20 September 2025 09:34:55 +0000 (0:00:00.153) 0:01:10.485 **** 2025-09-20 09:34:57.668945 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.668955 | orchestrator | 2025-09-20 09:34:57.668966 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-20 09:34:57.668977 | orchestrator | Saturday 20 September 2025 09:34:55 +0000 (0:00:00.151) 0:01:10.637 **** 2025-09-20 09:34:57.668987 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.668998 | orchestrator | 2025-09-20 09:34:57.669008 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-20 09:34:57.669019 | orchestrator | Saturday 20 September 2025 09:34:55 +0000 (0:00:00.143) 0:01:10.780 **** 2025-09-20 09:34:57.669029 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669040 | orchestrator | 2025-09-20 09:34:57.669050 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-20 09:34:57.669061 | orchestrator | Saturday 20 September 2025 09:34:55 +0000 (0:00:00.150) 0:01:10.931 **** 2025-09-20 09:34:57.669071 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669082 | orchestrator | 2025-09-20 09:34:57.669093 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-20 09:34:57.669103 | orchestrator | Saturday 20 September 2025 09:34:55 +0000 (0:00:00.147) 0:01:11.078 **** 2025-09-20 09:34:57.669113 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669124 | orchestrator | 2025-09-20 09:34:57.669134 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-20 09:34:57.669145 | orchestrator | Saturday 20 September 2025 09:34:55 +0000 (0:00:00.121) 0:01:11.200 **** 2025-09-20 09:34:57.669156 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669166 | orchestrator | 2025-09-20 09:34:57.669177 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-20 09:34:57.669187 | orchestrator | Saturday 20 September 2025 09:34:55 +0000 (0:00:00.153) 0:01:11.354 **** 2025-09-20 09:34:57.669198 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669208 | orchestrator | 2025-09-20 09:34:57.669219 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-20 09:34:57.669229 | orchestrator | Saturday 20 September 2025 09:34:56 +0000 (0:00:00.153) 0:01:11.507 **** 2025-09-20 09:34:57.669245 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669289 | orchestrator | 2025-09-20 09:34:57.669311 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-20 09:34:57.669328 | orchestrator | Saturday 20 September 2025 09:34:56 +0000 (0:00:00.366) 0:01:11.873 **** 
2025-09-20 09:34:57.669339 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669350 | orchestrator | 2025-09-20 09:34:57.669360 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-20 09:34:57.669371 | orchestrator | Saturday 20 September 2025 09:34:56 +0000 (0:00:00.158) 0:01:12.032 **** 2025-09-20 09:34:57.669381 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669410 | orchestrator | 2025-09-20 09:34:57.669422 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-20 09:34:57.669432 | orchestrator | Saturday 20 September 2025 09:34:56 +0000 (0:00:00.148) 0:01:12.181 **** 2025-09-20 09:34:57.669442 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669453 | orchestrator | 2025-09-20 09:34:57.669464 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-20 09:34:57.669474 | orchestrator | Saturday 20 September 2025 09:34:56 +0000 (0:00:00.133) 0:01:12.314 **** 2025-09-20 09:34:57.669485 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669495 | orchestrator | 2025-09-20 09:34:57.669506 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-20 09:34:57.669517 | orchestrator | Saturday 20 September 2025 09:34:57 +0000 (0:00:00.132) 0:01:12.446 **** 2025-09-20 09:34:57.669527 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669537 | orchestrator | 2025-09-20 09:34:57.669548 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-20 09:34:57.669558 | orchestrator | Saturday 20 September 2025 09:34:57 +0000 (0:00:00.144) 0:01:12.591 **** 2025-09-20 09:34:57.669569 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 
09:34:57.669580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:57.669590 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669601 | orchestrator | 2025-09-20 09:34:57.669611 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-20 09:34:57.669622 | orchestrator | Saturday 20 September 2025 09:34:57 +0000 (0:00:00.168) 0:01:12.759 **** 2025-09-20 09:34:57.669632 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:34:57.669643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:34:57.669654 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:34:57.669664 | orchestrator | 2025-09-20 09:34:57.669675 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-20 09:34:57.669685 | orchestrator | Saturday 20 September 2025 09:34:57 +0000 (0:00:00.174) 0:01:12.933 **** 2025-09-20 09:34:57.669703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:35:00.782626 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:35:00.782731 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:35:00.782747 | orchestrator | 2025-09-20 09:35:00.782760 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-20 09:35:00.782773 | orchestrator | Saturday 20 September 2025 
09:34:57 +0000 (0:00:00.154) 0:01:13.088 **** 2025-09-20 09:35:00.782784 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:35:00.782795 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:35:00.782806 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:35:00.782816 | orchestrator | 2025-09-20 09:35:00.782827 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-20 09:35:00.782838 | orchestrator | Saturday 20 September 2025 09:34:57 +0000 (0:00:00.170) 0:01:13.258 **** 2025-09-20 09:35:00.782848 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:35:00.782884 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:35:00.782896 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:35:00.782906 | orchestrator | 2025-09-20 09:35:00.782917 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-20 09:35:00.782928 | orchestrator | Saturday 20 September 2025 09:34:57 +0000 (0:00:00.153) 0:01:13.412 **** 2025-09-20 09:35:00.782938 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:35:00.782949 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:35:00.782960 | orchestrator | skipping: 
[testbed-node-5] 2025-09-20 09:35:00.782970 | orchestrator | 2025-09-20 09:35:00.782994 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-20 09:35:00.783006 | orchestrator | Saturday 20 September 2025 09:34:58 +0000 (0:00:00.157) 0:01:13.569 **** 2025-09-20 09:35:00.783017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:35:00.783027 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:35:00.783038 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:35:00.783048 | orchestrator | 2025-09-20 09:35:00.783059 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-20 09:35:00.783070 | orchestrator | Saturday 20 September 2025 09:34:58 +0000 (0:00:00.398) 0:01:13.968 **** 2025-09-20 09:35:00.783081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:35:00.783091 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:35:00.783102 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:35:00.783113 | orchestrator | 2025-09-20 09:35:00.783123 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-20 09:35:00.783134 | orchestrator | Saturday 20 September 2025 09:34:58 +0000 (0:00:00.158) 0:01:14.126 **** 2025-09-20 09:35:00.783144 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:35:00.783158 | orchestrator | 2025-09-20 09:35:00.783170 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-09-20 09:35:00.783183 | orchestrator | Saturday 20 September 2025 09:34:59 +0000 (0:00:00.557) 0:01:14.684 **** 2025-09-20 09:35:00.783195 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:35:00.783207 | orchestrator | 2025-09-20 09:35:00.783219 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-20 09:35:00.783232 | orchestrator | Saturday 20 September 2025 09:34:59 +0000 (0:00:00.520) 0:01:15.204 **** 2025-09-20 09:35:00.783244 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:35:00.783257 | orchestrator | 2025-09-20 09:35:00.783315 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-20 09:35:00.783333 | orchestrator | Saturday 20 September 2025 09:34:59 +0000 (0:00:00.154) 0:01:15.359 **** 2025-09-20 09:35:00.783354 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'vg_name': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'}) 2025-09-20 09:35:00.783374 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'vg_name': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'}) 2025-09-20 09:35:00.783395 | orchestrator | 2025-09-20 09:35:00.783415 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-20 09:35:00.783451 | orchestrator | Saturday 20 September 2025 09:35:00 +0000 (0:00:00.173) 0:01:15.532 **** 2025-09-20 09:35:00.783490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:35:00.783511 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:35:00.783531 | orchestrator | skipping: 
[testbed-node-5] 2025-09-20 09:35:00.783549 | orchestrator | 2025-09-20 09:35:00.783567 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-20 09:35:00.783584 | orchestrator | Saturday 20 September 2025 09:35:00 +0000 (0:00:00.156) 0:01:15.688 **** 2025-09-20 09:35:00.783604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:35:00.783622 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:35:00.783641 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:35:00.783660 | orchestrator | 2025-09-20 09:35:00.783680 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-20 09:35:00.783699 | orchestrator | Saturday 20 September 2025 09:35:00 +0000 (0:00:00.167) 0:01:15.856 **** 2025-09-20 09:35:00.783712 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'})  2025-09-20 09:35:00.783723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'})  2025-09-20 09:35:00.783734 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:35:00.783745 | orchestrator | 2025-09-20 09:35:00.783755 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-20 09:35:00.783766 | orchestrator | Saturday 20 September 2025 09:35:00 +0000 (0:00:00.163) 0:01:16.019 **** 2025-09-20 09:35:00.783776 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 09:35:00.783787 | orchestrator |  "lvm_report": { 2025-09-20 09:35:00.783799 | orchestrator |  "lv": [ 2025-09-20 
09:35:00.783810 | orchestrator |  { 2025-09-20 09:35:00.783821 | orchestrator |  "lv_name": "osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4", 2025-09-20 09:35:00.783840 | orchestrator |  "vg_name": "ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4" 2025-09-20 09:35:00.783851 | orchestrator |  }, 2025-09-20 09:35:00.783862 | orchestrator |  { 2025-09-20 09:35:00.783872 | orchestrator |  "lv_name": "osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126", 2025-09-20 09:35:00.783883 | orchestrator |  "vg_name": "ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126" 2025-09-20 09:35:00.783894 | orchestrator |  } 2025-09-20 09:35:00.783904 | orchestrator |  ], 2025-09-20 09:35:00.783915 | orchestrator |  "pv": [ 2025-09-20 09:35:00.783925 | orchestrator |  { 2025-09-20 09:35:00.783936 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-20 09:35:00.783947 | orchestrator |  "vg_name": "ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126" 2025-09-20 09:35:00.783957 | orchestrator |  }, 2025-09-20 09:35:00.783968 | orchestrator |  { 2025-09-20 09:35:00.783978 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-20 09:35:00.783989 | orchestrator |  "vg_name": "ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4" 2025-09-20 09:35:00.784000 | orchestrator |  } 2025-09-20 09:35:00.784010 | orchestrator |  ] 2025-09-20 09:35:00.784021 | orchestrator |  } 2025-09-20 09:35:00.784032 | orchestrator | } 2025-09-20 09:35:00.784043 | orchestrator | 2025-09-20 09:35:00.784054 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:35:00.784074 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-20 09:35:00.784085 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-20 09:35:00.784096 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-20 09:35:00.784107 | orchestrator | 2025-09-20 09:35:00.784117 | 
orchestrator | 2025-09-20 09:35:00.784128 | orchestrator | 2025-09-20 09:35:00.784139 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:35:00.784149 | orchestrator | Saturday 20 September 2025 09:35:00 +0000 (0:00:00.161) 0:01:16.181 **** 2025-09-20 09:35:00.784160 | orchestrator | =============================================================================== 2025-09-20 09:35:00.784170 | orchestrator | Create block VGs -------------------------------------------------------- 5.72s 2025-09-20 09:35:00.784181 | orchestrator | Create block LVs -------------------------------------------------------- 4.15s 2025-09-20 09:35:00.784192 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.91s 2025-09-20 09:35:00.784202 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.82s 2025-09-20 09:35:00.784213 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s 2025-09-20 09:35:00.784223 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.58s 2025-09-20 09:35:00.784234 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2025-09-20 09:35:00.784245 | orchestrator | Add known partitions to the list of available block devices ------------- 1.56s 2025-09-20 09:35:00.784285 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s 2025-09-20 09:35:01.205389 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s 2025-09-20 09:35:01.205436 | orchestrator | Print LVM report data --------------------------------------------------- 1.04s 2025-09-20 09:35:01.205448 | orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2025-09-20 09:35:01.205460 | orchestrator | Add known partitions to the list of 
available block devices ------------- 0.88s 2025-09-20 09:35:01.205471 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-09-20 09:35:01.205482 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s 2025-09-20 09:35:01.205493 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.74s 2025-09-20 09:35:01.205504 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.73s 2025-09-20 09:35:01.205514 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2025-09-20 09:35:01.205525 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.73s 2025-09-20 09:35:01.205536 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.72s 2025-09-20 09:35:13.513443 | orchestrator | 2025-09-20 09:35:13 | INFO  | Task add3515a-2701-4151-8403-19df8edf0863 (facts) was prepared for execution. 2025-09-20 09:35:13.513540 | orchestrator | 2025-09-20 09:35:13 | INFO  | It takes a moment until task add3515a-2701-4151-8403-19df8edf0863 (facts) has been started and output is visible here. 
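The Ceph LVM play above gathers `lvs`/`pvs`/`vgs` output with `--reportformat json`, combines the documents, and derives a list of VG/LV names. A minimal sketch of that parsing step, assuming the JSON shape LVM's report format produces (the sample document below is illustrative; the lv/vg names are taken from the `lvm_report` printed in the log):

```python
import json

# Hypothetical sample of `lvs --reportformat json` output, shaped like the
# lvm_report printed by the play above (names copied from the log).
lvs_json = """
{
  "report": [
    {
      "lv": [
        {"lv_name": "osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4",
         "vg_name": "ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4"},
        {"lv_name": "osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126",
         "vg_name": "ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126"}
      ]
    }
  ]
}
"""

def vg_lv_names(report_text: str) -> list[str]:
    """Build a 'vg_name/lv_name' list like the play's 'Create list of
    VG/LV names' task derives from the combined LVM report."""
    report = json.loads(report_text)
    return [
        f"{entry['vg_name']}/{entry['lv_name']}"
        for block in report["report"]
        for entry in block.get("lv", [])
    ]

names = vg_lv_names(lvs_json)
print(names)
```

The subsequent "Fail if ... LV defined in lvm_volumes is missing" tasks can then be understood as membership checks against such a list.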
2025-09-20 09:35:26.819203 | orchestrator | 2025-09-20 09:35:26.819380 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-20 09:35:26.819397 | orchestrator | 2025-09-20 09:35:26.819408 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-20 09:35:26.819418 | orchestrator | Saturday 20 September 2025 09:35:17 +0000 (0:00:00.283) 0:00:00.283 **** 2025-09-20 09:35:26.819428 | orchestrator | ok: [testbed-manager] 2025-09-20 09:35:26.819439 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:35:26.819472 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:35:26.819483 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:35:26.819492 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:35:26.819502 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:35:26.819511 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:35:26.819521 | orchestrator | 2025-09-20 09:35:26.819531 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-20 09:35:26.819540 | orchestrator | Saturday 20 September 2025 09:35:18 +0000 (0:00:01.130) 0:00:01.413 **** 2025-09-20 09:35:26.819550 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:35:26.819560 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:35:26.819571 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:35:26.819580 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:35:26.819590 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:35:26.819599 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:35:26.819609 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:35:26.819618 | orchestrator | 2025-09-20 09:35:26.819628 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 09:35:26.819638 | orchestrator | 2025-09-20 09:35:26.819647 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-09-20 09:35:26.819657 | orchestrator | Saturday 20 September 2025 09:35:19 +0000 (0:00:01.283) 0:00:02.696 **** 2025-09-20 09:35:26.819666 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:35:26.819676 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:35:26.819686 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:35:26.819695 | orchestrator | ok: [testbed-manager] 2025-09-20 09:35:26.819705 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:35:26.819714 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:35:26.819724 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:35:26.819733 | orchestrator | 2025-09-20 09:35:26.819743 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-20 09:35:26.819754 | orchestrator | 2025-09-20 09:35:26.819766 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-20 09:35:26.819777 | orchestrator | Saturday 20 September 2025 09:35:25 +0000 (0:00:05.846) 0:00:08.543 **** 2025-09-20 09:35:26.819788 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:35:26.819799 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:35:26.819810 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:35:26.819821 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:35:26.819832 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:35:26.819843 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:35:26.819854 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:35:26.819865 | orchestrator | 2025-09-20 09:35:26.819876 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:35:26.819888 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:35:26.819900 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-09-20 09:35:26.819911 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:35:26.819922 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:35:26.819933 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:35:26.819944 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:35:26.819955 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:35:26.819972 | orchestrator | 2025-09-20 09:35:26.819983 | orchestrator | 2025-09-20 09:35:26.819995 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:35:26.820006 | orchestrator | Saturday 20 September 2025 09:35:26 +0000 (0:00:00.562) 0:00:09.106 **** 2025-09-20 09:35:26.820017 | orchestrator | =============================================================================== 2025-09-20 09:35:26.820028 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.85s 2025-09-20 09:35:26.820039 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2025-09-20 09:35:26.820050 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2025-09-20 09:35:26.820061 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2025-09-20 09:35:39.203185 | orchestrator | 2025-09-20 09:35:39 | INFO  | Task 3e7091c2-0f5e-46f1-b659-642a8bdd657f (frr) was prepared for execution. 2025-09-20 09:35:39.203322 | orchestrator | 2025-09-20 09:35:39 | INFO  | It takes a moment until task 3e7091c2-0f5e-46f1-b659-642a8bdd657f (frr) has been started and output is visible here. 
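The `osism.commons.facts` role above creates the custom facts directory and copies fact files into it. Ansible local facts are `*.fact` files under `/etc/ansible/facts.d` that fact gathering exposes as `ansible_local.<name>`. A minimal sketch using a temporary directory in place of the real path (the fact name and its content here are made up for illustration):

```python
import json
import tempfile
from pathlib import Path

# Stand-in for /etc/ansible/facts.d; the real role would target that path.
facts_d = Path(tempfile.mkdtemp())
fact_file = facts_d / "testbed.fact"  # hypothetical fact name

# A JSON-format .fact file; Ansible would surface it as
# ansible_local.testbed on the next fact-gathering run.
fact_file.write_text(json.dumps({"role": "storage", "osds": 2}))

# What fact gathering would read back from the file.
loaded = json.loads(fact_file.read_text())
print(loaded)
```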
2025-09-20 09:36:06.848172 | orchestrator | 2025-09-20 09:36:06.848316 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-20 09:36:06.848332 | orchestrator | 2025-09-20 09:36:06.848344 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-20 09:36:06.848356 | orchestrator | Saturday 20 September 2025 09:35:43 +0000 (0:00:00.257) 0:00:00.257 **** 2025-09-20 09:36:06.848385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 09:36:06.848398 | orchestrator | 2025-09-20 09:36:06.848409 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-20 09:36:06.848420 | orchestrator | Saturday 20 September 2025 09:35:43 +0000 (0:00:00.228) 0:00:00.485 **** 2025-09-20 09:36:06.848431 | orchestrator | changed: [testbed-manager] 2025-09-20 09:36:06.848443 | orchestrator | 2025-09-20 09:36:06.848454 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-20 09:36:06.848465 | orchestrator | Saturday 20 September 2025 09:35:44 +0000 (0:00:01.151) 0:00:01.637 **** 2025-09-20 09:36:06.848476 | orchestrator | changed: [testbed-manager] 2025-09-20 09:36:06.848486 | orchestrator | 2025-09-20 09:36:06.848502 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-20 09:36:06.848513 | orchestrator | Saturday 20 September 2025 09:35:55 +0000 (0:00:10.697) 0:00:12.335 **** 2025-09-20 09:36:06.848523 | orchestrator | ok: [testbed-manager] 2025-09-20 09:36:06.848535 | orchestrator | 2025-09-20 09:36:06.848546 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-20 09:36:06.848557 | orchestrator | Saturday 20 September 2025 09:35:56 +0000 (0:00:01.314) 0:00:13.650 **** 2025-09-20 
09:36:06.848567 | orchestrator | changed: [testbed-manager]
2025-09-20 09:36:06.848578 | orchestrator |
2025-09-20 09:36:06.848589 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-20 09:36:06.848600 | orchestrator | Saturday 20 September 2025 09:35:57 +0000 (0:00:00.982) 0:00:14.632 ****
2025-09-20 09:36:06.848610 | orchestrator | ok: [testbed-manager]
2025-09-20 09:36:06.848621 | orchestrator |
2025-09-20 09:36:06.848632 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-20 09:36:06.848643 | orchestrator | Saturday 20 September 2025 09:35:58 +0000 (0:00:01.190) 0:00:15.823 ****
2025-09-20 09:36:06.848654 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 09:36:06.848664 | orchestrator |
2025-09-20 09:36:06.848675 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-20 09:36:06.848686 | orchestrator | Saturday 20 September 2025 09:35:59 +0000 (0:00:00.811) 0:00:16.635 ****
2025-09-20 09:36:06.848697 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:36:06.848707 | orchestrator |
2025-09-20 09:36:06.848719 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-20 09:36:06.848758 | orchestrator | Saturday 20 September 2025 09:35:59 +0000 (0:00:00.130) 0:00:16.766 ****
2025-09-20 09:36:06.848772 | orchestrator | changed: [testbed-manager]
2025-09-20 09:36:06.848784 | orchestrator |
2025-09-20 09:36:06.848798 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-20 09:36:06.848811 | orchestrator | Saturday 20 September 2025 09:36:00 +0000 (0:00:00.952) 0:00:17.718 ****
2025-09-20 09:36:06.848824 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-20 09:36:06.848837 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-20 09:36:06.848850 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-20 09:36:06.848863 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-20 09:36:06.848877 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-20 09:36:06.848890 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-20 09:36:06.848902 | orchestrator |
2025-09-20 09:36:06.848915 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-20 09:36:06.848928 | orchestrator | Saturday 20 September 2025 09:36:02 +0000 (0:00:02.002) 0:00:19.721 ****
2025-09-20 09:36:06.848940 | orchestrator | ok: [testbed-manager]
2025-09-20 09:36:06.848953 | orchestrator |
2025-09-20 09:36:06.848965 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-20 09:36:06.848978 | orchestrator | Saturday 20 September 2025 09:36:05 +0000 (0:00:02.381) 0:00:22.103 ****
2025-09-20 09:36:06.848990 | orchestrator | changed: [testbed-manager]
2025-09-20 09:36:06.849003 | orchestrator |
2025-09-20 09:36:06.849016 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:36:06.849033 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-20 09:36:06.849052 | orchestrator |
2025-09-20 09:36:06.849071 | orchestrator |
2025-09-20 09:36:06.849088 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:36:06.849106 | orchestrator | Saturday 20 September 2025 09:36:06 +0000 (0:00:01.383) 0:00:23.486 ****
2025-09-20 09:36:06.849124 | orchestrator | ===============================================================================
2025-09-20 09:36:06.849142 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.70s
2025-09-20 09:36:06.849159 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.38s
2025-09-20 09:36:06.849170 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.00s
2025-09-20 09:36:06.849181 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.38s
2025-09-20 09:36:06.849209 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.31s
2025-09-20 09:36:06.849221 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s
2025-09-20 09:36:06.849231 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.15s
2025-09-20 09:36:06.849242 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.98s
2025-09-20 09:36:06.849274 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.95s
2025-09-20 09:36:06.849285 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.81s
2025-09-20 09:36:06.849296 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s
2025-09-20 09:36:06.849307 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.13s
2025-09-20 09:36:07.135499 | orchestrator |
2025-09-20 09:36:07.137965 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Sep 20 09:36:07 UTC 2025
2025-09-20 09:36:07.138012 | orchestrator |
2025-09-20 09:36:08.962626 | orchestrator | 2025-09-20 09:36:08 | INFO  | Collection nutshell is prepared for execution
2025-09-20 09:36:08.962723 | orchestrator | 2025-09-20
09:36:08 | INFO  | D [0] - dotfiles
2025-09-20 09:36:19.092967 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [0] - homer
2025-09-20 09:36:19.093074 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [0] - netdata
2025-09-20 09:36:19.093089 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [0] - openstackclient
2025-09-20 09:36:19.093101 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [0] - phpmyadmin
2025-09-20 09:36:19.093112 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [0] - common
2025-09-20 09:36:19.097799 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [1] -- loadbalancer
2025-09-20 09:36:19.097826 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [2] --- opensearch
2025-09-20 09:36:19.097989 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [2] --- mariadb-ng
2025-09-20 09:36:19.098460 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [3] ---- horizon
2025-09-20 09:36:19.098834 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [3] ---- keystone
2025-09-20 09:36:19.099044 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [4] ----- neutron
2025-09-20 09:36:19.099655 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [5] ------ wait-for-nova
2025-09-20 09:36:19.099682 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [5] ------ octavia
2025-09-20 09:36:19.101961 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [4] ----- barbican
2025-09-20 09:36:19.102094 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [4] ----- designate
2025-09-20 09:36:19.102113 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [4] ----- ironic
2025-09-20 09:36:19.102125 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [4] ----- placement
2025-09-20 09:36:19.102136 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [4] ----- magnum
2025-09-20 09:36:19.102889 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [1] -- openvswitch
2025-09-20 09:36:19.102914 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [2] --- ovn
2025-09-20 09:36:19.103285 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [1] -- memcached
2025-09-20 09:36:19.103458 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [1] -- redis
2025-09-20 09:36:19.103781 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [1] -- rabbitmq-ng
2025-09-20 09:36:19.104332 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [0] - kubernetes
2025-09-20 09:36:19.106916 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [1] -- kubeconfig
2025-09-20 09:36:19.106939 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [1] -- copy-kubeconfig
2025-09-20 09:36:19.107154 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [0] - ceph
2025-09-20 09:36:19.109452 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [1] -- ceph-pools
2025-09-20 09:36:19.109474 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [2] --- copy-ceph-keys
2025-09-20 09:36:19.109731 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [3] ---- cephclient
2025-09-20 09:36:19.109753 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-20 09:36:19.109765 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [4] ----- wait-for-keystone
2025-09-20 09:36:19.110273 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-20 09:36:19.110295 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [5] ------ glance
2025-09-20 09:36:19.110451 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [5] ------ cinder
2025-09-20 09:36:19.110471 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [5] ------ nova
2025-09-20 09:36:19.110820 | orchestrator | 2025-09-20 09:36:19 | INFO  | A [4] ----- prometheus
2025-09-20 09:36:19.110962 | orchestrator | 2025-09-20 09:36:19 | INFO  | D [5] ------ grafana
2025-09-20 09:36:19.293704 | orchestrator | 2025-09-20 09:36:19 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-20 09:36:19.293814 | orchestrator | 2025-09-20 09:36:19 | INFO  | Tasks are running in the background
2025-09-20 09:36:22.090694 | orchestrator | 2025-09-20 09:36:22 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-09-20 09:36:24.235777 | orchestrator | 2025-09-20 09:36:24 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:36:24.235902 | orchestrator | 2025-09-20 09:36:24 | INFO  | Task 961f63b1-ed33-41d1-9be1-e2f18f67ea27 is in state STARTED
2025-09-20 09:36:24.238216 | orchestrator | 2025-09-20 09:36:24 | INFO  | Task 7d792b4e-7220-4e3a-ac3c-8d68a520c43d is in state STARTED
2025-09-20 09:36:24.238243 | orchestrator | 2025-09-20 09:36:24 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:36:24.238441 | orchestrator | 2025-09-20 09:36:24 | INFO  | Task 5a0167e3-47a2-4f9c-89a9-c142673e107d is in state STARTED
2025-09-20 09:36:24.238455 | orchestrator | 2025-09-20 09:36:24 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED
2025-09-20 09:36:24.238499 | orchestrator | 2025-09-20 09:36:24 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED
2025-09-20 09:36:24.238513 | orchestrator | 2025-09-20 09:36:24 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:36:46.006141 | orchestrator |
2025-09-20 09:36:46.006248 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-20 09:36:46.006286 | orchestrator |
2025-09-20 09:36:46.006297 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-09-20 09:36:46.006306 | orchestrator | Saturday 20 September 2025 09:36:31 +0000 (0:00:01.106) 0:00:01.106 ****
2025-09-20 09:36:46.006315 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:36:46.006325 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:36:46.006334 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:36:46.006343 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:36:46.006352 | orchestrator | changed: [testbed-manager]
2025-09-20 09:36:46.006361 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:36:46.006369 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:36:46.006378 | orchestrator |
2025-09-20 09:36:46.006387 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-09-20 09:36:46.006395 | orchestrator | Saturday 20 September 2025 09:36:35 +0000 (0:00:03.495) 0:00:04.602 ****
2025-09-20 09:36:46.006405 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-20 09:36:46.006414 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-20 09:36:46.006423 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-20 09:36:46.006431 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-20 09:36:46.006439 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-20 09:36:46.006448 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-20 09:36:46.006457 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-20 09:36:46.006465 | orchestrator |
2025-09-20 09:36:46.006474 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-09-20 09:36:46.006484 | orchestrator | Saturday 20 September 2025 09:36:37 +0000 (0:00:01.842) 0:00:06.444 ****
2025-09-20 09:36:46.006506 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 09:36:36.355854', 'end': '2025-09-20 09:36:36.365348', 'delta': '0:00:00.009494', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-20 09:36:46.006520 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 09:36:36.558871', 'end': '2025-09-20 09:36:36.570643', 'delta': '0:00:00.011772', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-20 09:36:46.006551 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 09:36:36.366482', 'end': '2025-09-20 09:36:36.375944', 'delta': '0:00:00.009462', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-20 09:36:46.006589 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 09:36:36.766345', 'end': '2025-09-20 09:36:36.777709', 'delta': '0:00:00.011364', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-20 09:36:46.006599 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 09:36:36.971791', 'end': '2025-09-20 09:36:36.982652', 'delta': '0:00:00.010861', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-20 09:36:46.006888 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 09:36:36.787832', 'end': '2025-09-20 09:36:36.800448', 'delta': '0:00:00.012616', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-20 09:36:46.006899 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-20 09:36:36.853648', 'end': '2025-09-20 09:36:36.861662', 'delta': '0:00:00.008014', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-20 09:36:46.006921 | orchestrator |
2025-09-20 09:36:46.006930 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-09-20 09:36:46.006939 | orchestrator | Saturday 20 September 2025 09:36:38 +0000 (0:00:01.668) 0:00:08.112 ****
2025-09-20 09:36:46.006947 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-20 09:36:46.006956 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-20 09:36:46.006965 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-20 09:36:46.006973 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-20 09:36:46.006982 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-20 09:36:46.006990 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-20 09:36:46.006999 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-20 09:36:46.007007 | orchestrator |
2025-09-20 09:36:46.007016 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-09-20 09:36:46.007025 | orchestrator | Saturday 20 September 2025 09:36:40 +0000 (0:00:02.033) 0:00:10.147 ****
2025-09-20 09:36:46.007037 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-20 09:36:46.007047 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-20 09:36:46.007055 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-20 09:36:46.007064 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-20 09:36:46.007072 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-20 09:36:46.007081 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-20 09:36:46.007090 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-20 09:36:46.007098 | orchestrator |
2025-09-20 09:36:46.007107 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:36:46.007123 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:36:46.007134 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:36:46.007142 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:36:46.007151 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:36:46.007160 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:36:46.007168 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:36:46.007177 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:36:46.007185 | orchestrator |
2025-09-20 09:36:46.007340 | orchestrator |
2025-09-20 09:36:46.007350 | orchestrator | TASKS
RECAP ********************************************************************
2025-09-20 09:36:46.007359 | orchestrator | Saturday 20 September 2025 09:36:43 +0000 (0:00:02.514) 0:00:12.661 ****
2025-09-20 09:36:46.007368 | orchestrator | ===============================================================================
2025-09-20 09:36:46.007376 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.50s
2025-09-20 09:36:46.007385 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.51s
2025-09-20 09:36:46.007403 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.04s
2025-09-20 09:36:46.007411 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.84s
2025-09-20 09:36:46.007420 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.67s
2025-09-20 09:36:46.007429 | orchestrator | 2025-09-20 09:36:45 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:36:46.007438 | orchestrator | 2025-09-20 09:36:46 | INFO  | Task 961f63b1-ed33-41d1-9be1-e2f18f67ea27 is in state STARTED
2025-09-20 09:36:46.007446 | orchestrator | 2025-09-20 09:36:46 | INFO  | Task 7d792b4e-7220-4e3a-ac3c-8d68a520c43d is in state SUCCESS
2025-09-20 09:36:46.007461 | orchestrator | 2025-09-20 09:36:46 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:36:46.008915 | orchestrator | 2025-09-20 09:36:46 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED
2025-09-20 09:36:46.013312 | orchestrator | 2025-09-20 09:36:46 | INFO  | Task 5a0167e3-47a2-4f9c-89a9-c142673e107d is in state STARTED
2025-09-20 09:36:46.013332 | orchestrator | 2025-09-20 09:36:46 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED
2025-09-20 09:36:46.015000 | orchestrator | 2025-09-20 09:36:46 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED
2025-09-20 09:36:46.015017 | orchestrator | 2025-09-20 09:36:46 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:37:13.785312 | orchestrator | 2025-09-20 09:37:13 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:37:13.785623 | orchestrator | 2025-09-20 09:37:13 | INFO  | Task 961f63b1-ed33-41d1-9be1-e2f18f67ea27 is in state STARTED
2025-09-20 09:37:13.787137 | orchestrator | 2025-09-20 09:37:13 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:37:13.787652 | orchestrator | 2025-09-20 09:37:13 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED
2025-09-20 09:37:13.791051 | orchestrator | 2025-09-20 09:37:13 | INFO  | Task 5a0167e3-47a2-4f9c-89a9-c142673e107d is in state SUCCESS
2025-09-20 09:37:13.793552 | orchestrator | 2025-09-20 09:37:13 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED
2025-09-20 09:37:13.794512 | orchestrator | 2025-09-20 09:37:13 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED
2025-09-20 09:37:13.794616 | orchestrator | 2025-09-20 09:37:13 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:37:16.830179 | orchestrator | 2025-09-20 09:37:16 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:37:16.830357 | orchestrator | 2025-09-20 09:37:16 | INFO  | Task
961f63b1-ed33-41d1-9be1-e2f18f67ea27 is in state SUCCESS 2025-09-20 09:37:16.831560 | orchestrator | 2025-09-20 09:37:16 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:16.834207 | orchestrator | 2025-09-20 09:37:16 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED 2025-09-20 09:37:16.834244 | orchestrator | 2025-09-20 09:37:16 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:16.835679 | orchestrator | 2025-09-20 09:37:16 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED 2025-09-20 09:37:16.835701 | orchestrator | 2025-09-20 09:37:16 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:37:19.884890 | orchestrator | 2025-09-20 09:37:19 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:37:19.886201 | orchestrator | 2025-09-20 09:37:19 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:19.887458 | orchestrator | 2025-09-20 09:37:19 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED 2025-09-20 09:37:19.890855 | orchestrator | 2025-09-20 09:37:19 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:19.890903 | orchestrator | 2025-09-20 09:37:19 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED 2025-09-20 09:37:19.890916 | orchestrator | 2025-09-20 09:37:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:37:22.942894 | orchestrator | 2025-09-20 09:37:22 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:37:22.943020 | orchestrator | 2025-09-20 09:37:22 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:22.943391 | orchestrator | 2025-09-20 09:37:22 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED 2025-09-20 09:37:22.944146 | orchestrator | 2025-09-20 09:37:22 | INFO  | Task 
51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:22.946303 | orchestrator | 2025-09-20 09:37:22 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED 2025-09-20 09:37:22.946402 | orchestrator | 2025-09-20 09:37:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:37:25.996014 | orchestrator | 2025-09-20 09:37:25 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:37:25.996540 | orchestrator | 2025-09-20 09:37:25 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:25.997018 | orchestrator | 2025-09-20 09:37:25 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED 2025-09-20 09:37:25.999523 | orchestrator | 2025-09-20 09:37:25 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:26.000158 | orchestrator | 2025-09-20 09:37:25 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED 2025-09-20 09:37:26.000179 | orchestrator | 2025-09-20 09:37:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:37:29.088224 | orchestrator | 2025-09-20 09:37:29 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:37:29.089161 | orchestrator | 2025-09-20 09:37:29 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:29.090169 | orchestrator | 2025-09-20 09:37:29 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED 2025-09-20 09:37:29.091107 | orchestrator | 2025-09-20 09:37:29 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:29.093708 | orchestrator | 2025-09-20 09:37:29 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED 2025-09-20 09:37:29.096090 | orchestrator | 2025-09-20 09:37:29 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:37:32.130695 | orchestrator | 2025-09-20 09:37:32 | INFO  | Task 
bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:37:32.130827 | orchestrator | 2025-09-20 09:37:32 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:32.132283 | orchestrator | 2025-09-20 09:37:32 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED 2025-09-20 09:37:32.134148 | orchestrator | 2025-09-20 09:37:32 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:32.135208 | orchestrator | 2025-09-20 09:37:32 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED 2025-09-20 09:37:32.135230 | orchestrator | 2025-09-20 09:37:32 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:37:35.184804 | orchestrator | 2025-09-20 09:37:35 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:37:35.186681 | orchestrator | 2025-09-20 09:37:35 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:35.190493 | orchestrator | 2025-09-20 09:37:35 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED 2025-09-20 09:37:35.227402 | orchestrator | 2025-09-20 09:37:35 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:35.227440 | orchestrator | 2025-09-20 09:37:35 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED 2025-09-20 09:37:35.227453 | orchestrator | 2025-09-20 09:37:35 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:37:38.256820 | orchestrator | 2025-09-20 09:37:38 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:37:38.257095 | orchestrator | 2025-09-20 09:37:38 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:38.258224 | orchestrator | 2025-09-20 09:37:38 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED 2025-09-20 09:37:38.259044 | orchestrator | 2025-09-20 09:37:38 | INFO  | Task 
51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:38.261064 | orchestrator | 2025-09-20 09:37:38 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED 2025-09-20 09:37:38.261089 | orchestrator | 2025-09-20 09:37:38 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:37:41.309406 | orchestrator | 2025-09-20 09:37:41 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:37:41.309908 | orchestrator | 2025-09-20 09:37:41 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:41.309940 | orchestrator | 2025-09-20 09:37:41 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state STARTED 2025-09-20 09:37:41.313801 | orchestrator | 2025-09-20 09:37:41 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:41.314389 | orchestrator | 2025-09-20 09:37:41 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state STARTED 2025-09-20 09:37:41.315014 | orchestrator | 2025-09-20 09:37:41 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:37:44.358620 | orchestrator | 2025-09-20 09:37:44.358748 | orchestrator | 2025-09-20 09:37:44.358764 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-20 09:37:44.358776 | orchestrator | 2025-09-20 09:37:44.358788 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-20 09:37:44.358832 | orchestrator | Saturday 20 September 2025 09:36:32 +0000 (0:00:00.739) 0:00:00.739 **** 2025-09-20 09:37:44.359059 | orchestrator | ok: [testbed-manager] => { 2025-09-20 09:37:44.359077 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-09-20 09:37:44.359090 | orchestrator | } 2025-09-20 09:37:44.359101 | orchestrator | 2025-09-20 09:37:44.359112 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-20 09:37:44.359123 | orchestrator | Saturday 20 September 2025 09:36:32 +0000 (0:00:00.340) 0:00:01.079 **** 2025-09-20 09:37:44.359134 | orchestrator | ok: [testbed-manager] 2025-09-20 09:37:44.359145 | orchestrator | 2025-09-20 09:37:44.359156 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-20 09:37:44.359167 | orchestrator | Saturday 20 September 2025 09:36:33 +0000 (0:00:01.535) 0:00:02.615 **** 2025-09-20 09:37:44.359178 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-20 09:37:44.359189 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-20 09:37:44.359199 | orchestrator | 2025-09-20 09:37:44.359210 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-20 09:37:44.359221 | orchestrator | Saturday 20 September 2025 09:36:35 +0000 (0:00:01.242) 0:00:03.857 **** 2025-09-20 09:37:44.359231 | orchestrator | changed: [testbed-manager] 2025-09-20 09:37:44.359242 | orchestrator | 2025-09-20 09:37:44.359253 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-20 09:37:44.359263 | orchestrator | Saturday 20 September 2025 09:36:38 +0000 (0:00:03.289) 0:00:07.147 **** 2025-09-20 09:37:44.359274 | orchestrator | changed: [testbed-manager] 2025-09-20 09:37:44.359285 | orchestrator | 2025-09-20 09:37:44.359295 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-20 09:37:44.359337 | orchestrator | Saturday 20 September 2025 09:36:40 +0000 (0:00:02.397) 0:00:09.545 **** 2025-09-20 09:37:44.359349 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-09-20 09:37:44.359359 | orchestrator | ok: [testbed-manager] 2025-09-20 09:37:44.359370 | orchestrator | 2025-09-20 09:37:44.359381 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-20 09:37:44.359391 | orchestrator | Saturday 20 September 2025 09:37:07 +0000 (0:00:26.819) 0:00:36.365 **** 2025-09-20 09:37:44.359402 | orchestrator | changed: [testbed-manager] 2025-09-20 09:37:44.359413 | orchestrator | 2025-09-20 09:37:44.359423 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:37:44.359434 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:37:44.359447 | orchestrator | 2025-09-20 09:37:44.359458 | orchestrator | 2025-09-20 09:37:44.359469 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:37:44.359480 | orchestrator | Saturday 20 September 2025 09:37:11 +0000 (0:00:03.787) 0:00:40.152 **** 2025-09-20 09:37:44.359490 | orchestrator | =============================================================================== 2025-09-20 09:37:44.359501 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.82s 2025-09-20 09:37:44.359512 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.79s 2025-09-20 09:37:44.359522 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.29s 2025-09-20 09:37:44.359533 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.40s 2025-09-20 09:37:44.359544 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.54s 2025-09-20 09:37:44.359554 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.24s 2025-09-20 09:37:44.359565 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.34s 2025-09-20 09:37:44.359575 | orchestrator | 2025-09-20 09:37:44.359586 | orchestrator | 2025-09-20 09:37:44.359596 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-20 09:37:44.359622 | orchestrator | 2025-09-20 09:37:44.359685 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-20 09:37:44.359699 | orchestrator | Saturday 20 September 2025 09:36:31 +0000 (0:00:00.638) 0:00:00.638 **** 2025-09-20 09:37:44.359712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-20 09:37:44.359726 | orchestrator | 2025-09-20 09:37:44.359739 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-20 09:37:44.359752 | orchestrator | Saturday 20 September 2025 09:36:31 +0000 (0:00:00.372) 0:00:01.011 **** 2025-09-20 09:37:44.359764 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-20 09:37:44.359776 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-20 09:37:44.359788 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-20 09:37:44.359800 | orchestrator | 2025-09-20 09:37:44.359813 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-20 09:37:44.359825 | orchestrator | Saturday 20 September 2025 09:36:32 +0000 (0:00:01.414) 0:00:02.425 **** 2025-09-20 09:37:44.359837 | orchestrator | changed: [testbed-manager] 2025-09-20 09:37:44.359849 | orchestrator | 2025-09-20 09:37:44.359861 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-20 09:37:44.359874 | orchestrator | Saturday 20 September 2025 09:36:34 +0000 (0:00:01.891) 
0:00:04.316 **** 2025-09-20 09:37:44.359904 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-20 09:37:44.359917 | orchestrator | ok: [testbed-manager] 2025-09-20 09:37:44.359929 | orchestrator | 2025-09-20 09:37:44.359942 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-20 09:37:44.359954 | orchestrator | Saturday 20 September 2025 09:37:08 +0000 (0:00:33.569) 0:00:37.886 **** 2025-09-20 09:37:44.359966 | orchestrator | changed: [testbed-manager] 2025-09-20 09:37:44.359978 | orchestrator | 2025-09-20 09:37:44.359990 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-20 09:37:44.360002 | orchestrator | Saturday 20 September 2025 09:37:10 +0000 (0:00:02.537) 0:00:40.423 **** 2025-09-20 09:37:44.360014 | orchestrator | ok: [testbed-manager] 2025-09-20 09:37:44.360026 | orchestrator | 2025-09-20 09:37:44.360037 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-20 09:37:44.360048 | orchestrator | Saturday 20 September 2025 09:37:12 +0000 (0:00:01.269) 0:00:41.693 **** 2025-09-20 09:37:44.360059 | orchestrator | changed: [testbed-manager] 2025-09-20 09:37:44.360070 | orchestrator | 2025-09-20 09:37:44.360081 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-20 09:37:44.360091 | orchestrator | Saturday 20 September 2025 09:37:14 +0000 (0:00:02.127) 0:00:43.820 **** 2025-09-20 09:37:44.360102 | orchestrator | changed: [testbed-manager] 2025-09-20 09:37:44.360113 | orchestrator | 2025-09-20 09:37:44.360124 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-20 09:37:44.360135 | orchestrator | Saturday 20 September 2025 09:37:15 +0000 (0:00:00.928) 0:00:44.748 **** 2025-09-20 09:37:44.360146 | orchestrator | changed: 
[testbed-manager] 2025-09-20 09:37:44.360157 | orchestrator | 2025-09-20 09:37:44.360167 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-20 09:37:44.360178 | orchestrator | Saturday 20 September 2025 09:37:16 +0000 (0:00:00.782) 0:00:45.531 **** 2025-09-20 09:37:44.360189 | orchestrator | ok: [testbed-manager] 2025-09-20 09:37:44.360200 | orchestrator | 2025-09-20 09:37:44.360210 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:37:44.360221 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:37:44.360232 | orchestrator | 2025-09-20 09:37:44.360252 | orchestrator | 2025-09-20 09:37:44.360262 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:37:44.360273 | orchestrator | Saturday 20 September 2025 09:37:16 +0000 (0:00:00.341) 0:00:45.873 **** 2025-09-20 09:37:44.360284 | orchestrator | =============================================================================== 2025-09-20 09:37:44.360295 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.57s 2025-09-20 09:37:44.360324 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.54s 2025-09-20 09:37:44.360335 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.13s 2025-09-20 09:37:44.360346 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.89s 2025-09-20 09:37:44.360357 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.41s 2025-09-20 09:37:44.360367 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.27s 2025-09-20 09:37:44.360378 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.93s 
2025-09-20 09:37:44.360389 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.78s 2025-09-20 09:37:44.360405 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.37s 2025-09-20 09:37:44.360417 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.34s 2025-09-20 09:37:44.360427 | orchestrator | 2025-09-20 09:37:44.360438 | orchestrator | 2025-09-20 09:37:44.360449 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-20 09:37:44.360459 | orchestrator | 2025-09-20 09:37:44.360470 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-20 09:37:44.360481 | orchestrator | Saturday 20 September 2025 09:36:48 +0000 (0:00:00.348) 0:00:00.348 **** 2025-09-20 09:37:44.360492 | orchestrator | ok: [testbed-manager] 2025-09-20 09:37:44.360502 | orchestrator | 2025-09-20 09:37:44.360513 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-20 09:37:44.360524 | orchestrator | Saturday 20 September 2025 09:36:49 +0000 (0:00:01.679) 0:00:02.027 **** 2025-09-20 09:37:44.360535 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-20 09:37:44.360546 | orchestrator | 2025-09-20 09:37:44.360557 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-20 09:37:44.360568 | orchestrator | Saturday 20 September 2025 09:36:50 +0000 (0:00:00.492) 0:00:02.520 **** 2025-09-20 09:37:44.360578 | orchestrator | changed: [testbed-manager] 2025-09-20 09:37:44.360589 | orchestrator | 2025-09-20 09:37:44.360599 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-20 09:37:44.360610 | orchestrator | Saturday 20 September 2025 09:36:51 +0000 (0:00:01.307) 0:00:03.827 **** 2025-09-20 09:37:44.360621 | 
orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-09-20 09:37:44.360632 | orchestrator | ok: [testbed-manager] 2025-09-20 09:37:44.360642 | orchestrator | 2025-09-20 09:37:44.360653 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-20 09:37:44.360664 | orchestrator | Saturday 20 September 2025 09:37:32 +0000 (0:00:41.049) 0:00:44.877 **** 2025-09-20 09:37:44.360675 | orchestrator | changed: [testbed-manager] 2025-09-20 09:37:44.360685 | orchestrator | 2025-09-20 09:37:44.360696 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:37:44.360707 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:37:44.360718 | orchestrator | 2025-09-20 09:37:44.360728 | orchestrator | 2025-09-20 09:37:44.360739 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:37:44.360757 | orchestrator | Saturday 20 September 2025 09:37:42 +0000 (0:00:09.985) 0:00:54.862 **** 2025-09-20 09:37:44.360769 | orchestrator | =============================================================================== 2025-09-20 09:37:44.360780 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 41.05s 2025-09-20 09:37:44.360797 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 9.99s 2025-09-20 09:37:44.360808 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.68s 2025-09-20 09:37:44.360819 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.31s 2025-09-20 09:37:44.360829 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.49s 2025-09-20 09:37:44.360840 | orchestrator | 2025-09-20 09:37:44 | INFO  | Task 
bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:37:44.360851 | orchestrator | 2025-09-20 09:37:44 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:37:44.360862 | orchestrator | 2025-09-20 09:37:44 | INFO  | Task 627f0c06-fff7-4e35-9898-d8048ff2f02f is in state SUCCESS 2025-09-20 09:37:44.360873 | orchestrator | 2025-09-20 09:37:44 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED 2025-09-20 09:37:44.360883 | orchestrator | 2025-09-20 09:37:44.360894 | orchestrator | 2025-09-20 09:37:44 | INFO  | Task 165c5644-bd0a-4fdc-a04a-28a7dc4bc949 is in state SUCCESS 2025-09-20 09:37:44.360905 | orchestrator | 2025-09-20 09:37:44.360916 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:37:44.360927 | orchestrator | 2025-09-20 09:37:44.360938 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:37:44.360948 | orchestrator | Saturday 20 September 2025 09:36:32 +0000 (0:00:01.003) 0:00:01.003 **** 2025-09-20 09:37:44.360959 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-20 09:37:44.360970 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-20 09:37:44.360980 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-20 09:37:44.360991 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-20 09:37:44.361001 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-20 09:37:44.361012 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-20 09:37:44.361023 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-20 09:37:44.361034 | orchestrator | 2025-09-20 09:37:44.361044 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-20 09:37:44.361055 | 
orchestrator | 2025-09-20 09:37:44.361065 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-20 09:37:44.361076 | orchestrator | Saturday 20 September 2025 09:36:34 +0000 (0:00:01.236) 0:00:02.240 **** 2025-09-20 09:37:44.361103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:37:44.361123 | orchestrator | 2025-09-20 09:37:44.361139 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-20 09:37:44.361150 | orchestrator | Saturday 20 September 2025 09:36:36 +0000 (0:00:02.759) 0:00:05.000 **** 2025-09-20 09:37:44.361161 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:37:44.361172 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:37:44.361182 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:37:44.361193 | orchestrator | ok: [testbed-manager] 2025-09-20 09:37:44.361204 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:37:44.361214 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:37:44.361225 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:37:44.361236 | orchestrator | 2025-09-20 09:37:44.361247 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-20 09:37:44.361257 | orchestrator | Saturday 20 September 2025 09:36:39 +0000 (0:00:02.537) 0:00:07.537 **** 2025-09-20 09:37:44.361268 | orchestrator | ok: [testbed-manager] 2025-09-20 09:37:44.361279 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:37:44.361289 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:37:44.361324 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:37:44.361336 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:37:44.361346 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:37:44.361357 | orchestrator 
| ok: [testbed-node-5]
2025-09-20 09:37:44.361367 | orchestrator |
2025-09-20 09:37:44.361378 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-20 09:37:44.361389 | orchestrator | Saturday 20 September 2025 09:36:42 +0000 (0:00:02.758) 0:00:10.296 ****
2025-09-20 09:37:44.361400 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:37:44.361411 | orchestrator | changed: [testbed-manager]
2025-09-20 09:37:44.361421 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:37:44.361432 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:37:44.361442 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:37:44.361453 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:37:44.361463 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:37:44.361474 | orchestrator |
2025-09-20 09:37:44.361485 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-20 09:37:44.361496 | orchestrator | Saturday 20 September 2025 09:36:43 +0000 (0:00:01.749) 0:00:12.046 ****
2025-09-20 09:37:44.361506 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:37:44.361517 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:37:44.361528 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:37:44.361538 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:37:44.361549 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:37:44.361559 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:37:44.361570 | orchestrator | changed: [testbed-manager]
2025-09-20 09:37:44.361581 | orchestrator |
2025-09-20 09:37:44.361592 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-20 09:37:44.361608 | orchestrator | Saturday 20 September 2025 09:36:54 +0000 (0:00:10.159) 0:00:22.205 ****
2025-09-20 09:37:44.361620 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:37:44.361630 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:37:44.361641 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:37:44.361651 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:37:44.361662 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:37:44.361673 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:37:44.361683 | orchestrator | changed: [testbed-manager]
2025-09-20 09:37:44.361694 | orchestrator |
2025-09-20 09:37:44.361705 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-20 09:37:44.361715 | orchestrator | Saturday 20 September 2025 09:37:22 +0000 (0:00:28.440) 0:00:50.646 ****
2025-09-20 09:37:44.361728 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:37:44.361741 | orchestrator |
2025-09-20 09:37:44.361752 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-20 09:37:44.361762 | orchestrator | Saturday 20 September 2025 09:37:24 +0000 (0:00:01.529) 0:00:52.176 ****
2025-09-20 09:37:44.361773 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-20 09:37:44.361785 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-20 09:37:44.361795 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-20 09:37:44.361806 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-20 09:37:44.361816 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-20 09:37:44.361827 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-20 09:37:44.361837 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-20 09:37:44.361848 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-20 09:37:44.361859 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-20 09:37:44.361870 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-20 09:37:44.361881 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-20 09:37:44.361899 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-20 09:37:44.361910 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-20 09:37:44.361920 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-20 09:37:44.361931 | orchestrator |
2025-09-20 09:37:44.361942 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-20 09:37:44.361953 | orchestrator | Saturday 20 September 2025 09:37:30 +0000 (0:00:06.185) 0:00:58.361 ****
2025-09-20 09:37:44.361964 | orchestrator | ok: [testbed-manager]
2025-09-20 09:37:44.361974 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:37:44.361985 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:37:44.361996 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:37:44.362007 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:37:44.362083 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:37:44.362098 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:37:44.362109 | orchestrator |
2025-09-20 09:37:44.362120 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-20 09:37:44.362131 | orchestrator | Saturday 20 September 2025 09:37:31 +0000 (0:00:01.336) 0:00:59.697 ****
2025-09-20 09:37:44.362142 | orchestrator | changed: [testbed-manager]
2025-09-20 09:37:44.362152 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:37:44.362163 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:37:44.362174 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:37:44.362190 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:37:44.362202 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:37:44.362213 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:37:44.362223 | orchestrator |
2025-09-20 09:37:44.362234 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-09-20 09:37:44.362245 | orchestrator | Saturday 20 September 2025 09:37:33 +0000 (0:00:01.917) 0:01:01.614 ****
2025-09-20 09:37:44.362256 | orchestrator | ok: [testbed-manager]
2025-09-20 09:37:44.362267 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:37:44.362277 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:37:44.362288 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:37:44.362299 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:37:44.362326 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:37:44.362337 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:37:44.362347 | orchestrator |
2025-09-20 09:37:44.362358 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-09-20 09:37:44.362369 | orchestrator | Saturday 20 September 2025 09:37:35 +0000 (0:00:01.526) 0:01:03.141 ****
2025-09-20 09:37:44.362380 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:37:44.362391 | orchestrator | ok: [testbed-manager]
2025-09-20 09:37:44.362401 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:37:44.362412 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:37:44.362423 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:37:44.362433 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:37:44.362444 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:37:44.362454 | orchestrator |
2025-09-20 09:37:44.362465 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-09-20 09:37:44.362476 | orchestrator | Saturday 20 September 2025 09:37:37 +0000 (0:00:02.330) 0:01:05.471 ****
2025-09-20 09:37:44.362487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-09-20 09:37:44.362500 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:37:44.362511 | orchestrator |
2025-09-20 09:37:44.362522 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-09-20 09:37:44.362533 | orchestrator | Saturday 20 September 2025 09:37:38 +0000 (0:00:01.175) 0:01:06.646 ****
2025-09-20 09:37:44.362544 | orchestrator | changed: [testbed-manager]
2025-09-20 09:37:44.362555 | orchestrator |
2025-09-20 09:37:44.362574 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-09-20 09:37:44.362593 | orchestrator | Saturday 20 September 2025 09:37:40 +0000 (0:00:01.716) 0:01:08.363 ****
2025-09-20 09:37:44.362604 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:37:44.362615 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:37:44.362626 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:37:44.362637 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:37:44.362648 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:37:44.362658 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:37:44.362669 | orchestrator | changed: [testbed-manager]
2025-09-20 09:37:44.362680 | orchestrator |
2025-09-20 09:37:44.362691 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:37:44.362702 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:37:44.362713 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:37:44.362724 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:37:44.362735 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:37:44.362746 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:37:44.362757 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:37:44.362768 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:37:44.362779 | orchestrator |
2025-09-20 09:37:44.362790 | orchestrator |
2025-09-20 09:37:44.362800 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:37:44.362811 | orchestrator | Saturday 20 September 2025 09:37:43 +0000 (0:00:03.477) 0:01:11.842 ****
2025-09-20 09:37:44.362822 | orchestrator | ===============================================================================
2025-09-20 09:37:44.362833 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 28.44s
2025-09-20 09:37:44.362844 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.16s
2025-09-20 09:37:44.362855 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.19s
2025-09-20 09:37:44.362865 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.48s
2025-09-20 09:37:44.362876 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.76s
2025-09-20 09:37:44.362887 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.76s
2025-09-20 09:37:44.362898 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.54s
2025-09-20 09:37:44.362909 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.33s
2025-09-20 09:37:44.362925 | orchestrator | osism.services.netdata :
Opt out from anonymous statistics -------------- 1.92s
2025-09-20 09:37:44.362936 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.75s
2025-09-20 09:37:44.362947 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.72s
2025-09-20 09:37:44.362957 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.53s
2025-09-20 09:37:44.362968 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.53s
2025-09-20 09:37:44.362979 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.34s
2025-09-20 09:37:44.362990 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.24s
2025-09-20 09:37:44.363015 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.18s
2025-09-20 09:37:44.363026 | orchestrator | 2025-09-20 09:37:44 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:37:47.409432 | orchestrator | 2025-09-20 09:37:47 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:37:47.410332 | orchestrator | 2025-09-20 09:37:47 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:37:47.412188 | orchestrator | 2025-09-20 09:37:47 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state STARTED
2025-09-20 09:37:47.412417 | orchestrator | 2025-09-20 09:37:47 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:38:45.414403 | orchestrator | 2025-09-20 09:38:45 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:38:45.415357 | orchestrator | 2025-09-20 09:38:45 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:38:45.415424 | orchestrator | 2025-09-20 09:38:45 | INFO  | Task 8b144bd3-d54a-4285-b4da-4b2c619b6917 is in state STARTED
2025-09-20 09:38:45.417158 | orchestrator | 2025-09-20 09:38:45 | INFO  | Task 7d4b71e4-6d58-4240-88f5-9344c1265972 is in state STARTED
2025-09-20 09:38:45.417642 | orchestrator | 2025-09-20 09:38:45 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:38:45.421421 | orchestrator | 2025-09-20 09:38:45 | INFO  | Task 51e99ea9-a1f8-4cbc-9816-969eb47de2aa is in state SUCCESS
2025-09-20 09:38:45.428901 | orchestrator |
2025-09-20 09:38:45.428985 | orchestrator |
2025-09-20 09:38:45.429000 | orchestrator | PLAY [Apply role common] *******************************************************
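The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a simple poll-until-done loop: re-read the state of every outstanding task ID, log one status line per task, sleep, and repeat until no task is still running. A minimal sketch of that pattern, assuming a hypothetical `get_task_state` lookup in place of the real OSISM task-backend query:

```python
import time


def make_state_source(started_polls):
    """Hypothetical stand-in for the real task-state lookup: each task
    reports STARTED for a fixed number of polls, then SUCCESS."""
    remaining = dict(started_polls)

    def get_task_state(task_id):
        if remaining[task_id] > 0:
            remaining[task_id] -= 1
            return "STARTED"
        return "SUCCESS"

    return get_task_state


def wait_for_tasks(task_ids, get_task_state, interval=1.0, sleep=time.sleep):
    """Poll all tasks until none is STARTED, logging like the job output."""
    pending = list(task_ids)
    states = {}
    while pending:
        for task_id in pending:
            states[task_id] = get_task_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        # Keep only tasks that have not finished yet.
        pending = [t for t in pending if states[t] != "SUCCESS"]
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return states
```

Injecting `sleep` keeps the loop testable without real one-second delays; the real client presumably also distinguishes terminal failure states, which this sketch omits.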
2025-09-20 09:38:45.429011 | orchestrator |
2025-09-20 09:38:45.429021 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-20 09:38:45.429050 | orchestrator | Saturday 20 September 2025 09:36:23 +0000 (0:00:00.263) 0:00:00.264 ****
2025-09-20 09:38:45.429062 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:38:45.429073 | orchestrator |
2025-09-20 09:38:45.429083 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-20 09:38:45.429092 | orchestrator | Saturday 20 September 2025 09:36:25 +0000 (0:00:01.378) 0:00:01.642 ****
2025-09-20 09:38:45.429102 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-20 09:38:45.429112 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-20 09:38:45.429121 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-20 09:38:45.429131 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-20 09:38:45.429140 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-20 09:38:45.429150 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-20 09:38:45.429159 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-20 09:38:45.429169 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-20 09:38:45.429178 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-20 09:38:45.429189 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-20 09:38:45.429198 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-20 09:38:45.429207 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-20 09:38:45.429217 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-20 09:38:45.429227 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-20 09:38:45.429236 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-20 09:38:45.429246 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-20 09:38:45.429255 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-20 09:38:45.429265 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-20 09:38:45.429281 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-20 09:38:45.429291 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-20 09:38:45.429300 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-20 09:38:45.429310 | orchestrator |
2025-09-20 09:38:45.429320 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-20 09:38:45.429329 | orchestrator | Saturday 20 September 2025 09:36:29 +0000 (0:00:04.186) 0:00:05.829 ****
2025-09-20 09:38:45.429339 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:38:45.429350 | orchestrator |
2025-09-20 09:38:45.429359 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-09-20 09:38:45.429396 | orchestrator | Saturday 20 September 2025 09:36:30 +0000 (0:00:01.111) 0:00:06.940 ****
2025-09-20 09:38:45.429410 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.429431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.429463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.429476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.429487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.429498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.429513 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.429525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.429542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.429568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.429580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.429591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.429602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.429617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.429629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.429719 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.429733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.431078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.431128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.431142 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.431153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.431165 | orchestrator |
2025-09-20 09:38:45.431178 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-20 09:38:45.431191 | orchestrator | Saturday 20 September 2025 09:36:35 +0000 (0:00:05.172) 0:00:12.113 ****
2025-09-20 09:38:45.431225 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.431241 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.431276 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.431289 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:38:45.431302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2',
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431422 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:38:45.431433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431483 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:38:45.431494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431606 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:38:45.431617 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:38:45.431628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431664 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431675 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:38:45.431686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431728 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:38:45.431739 | orchestrator | 2025-09-20 09:38:45.431750 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-20 09:38:45.431761 | orchestrator | Saturday 20 September 2025 09:36:37 +0000 (0:00:01.925) 0:00:14.038 **** 2025-09-20 09:38:45.431772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431817 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431829 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431848 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431860 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:38:45.431889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431924 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:38:45.431942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.431971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.431994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.432012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.432024 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:38:45.432034 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:38:45.432045 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:38:45.432056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.432075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.432087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.432098 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:38:45.432116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-20 09:38:45.432128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.432139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:38:45.432150 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:38:45.432161 | orchestrator | 2025-09-20 09:38:45.432172 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-20 09:38:45.432183 | orchestrator | Saturday 20 September 2025 09:36:40 +0000 (0:00:02.425) 0:00:16.464 **** 2025-09-20 09:38:45.432194 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:38:45.432205 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:38:45.432215 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:38:45.432226 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:38:45.432237 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:38:45.432253 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:38:45.432265 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:38:45.432276 | orchestrator | 2025-09-20 09:38:45.432287 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-20 09:38:45.432297 | orchestrator | Saturday 20 September 2025 09:36:41 +0000 (0:00:01.335) 0:00:17.800 **** 2025-09-20 09:38:45.432308 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:38:45.432318 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:38:45.432329 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:38:45.432339 | orchestrator | skipping: 
[testbed-node-2] 2025-09-20 09:38:45.432357 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:38:45.432388 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:38:45.432399 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:38:45.432410 | orchestrator | 2025-09-20 09:38:45.432421 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-20 09:38:45.432487 | orchestrator | Saturday 20 September 2025 09:36:42 +0000 (0:00:01.641) 0:00:19.442 **** 2025-09-20 09:38:45.432499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 09:38:45.432512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 09:38:45.432529 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 09:38:45.432541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 09:38:45.432553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 09:38:45.432564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-20 09:38:45.432584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.432634 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432651 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432775 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432790 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.432808 | orchestrator |
2025-09-20 09:38:45.432826 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-09-20 09:38:45.432845 | orchestrator | Saturday 20 September 2025 09:36:49 +0000 (0:00:06.527) 0:00:25.969 ****
2025-09-20 09:38:45.432863 | orchestrator | [WARNING]: Skipped
2025-09-20 09:38:45.432883 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-09-20 09:38:45.432910 | orchestrator | to this access issue:
2025-09-20 09:38:45.432927 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-09-20 09:38:45.432945 | orchestrator | directory
2025-09-20 09:38:45.432964 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 09:38:45.432982 | orchestrator |
2025-09-20 09:38:45.432994 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-09-20 09:38:45.433004 | orchestrator | Saturday 20 September 2025 09:36:50 +0000 (0:00:01.152) 0:00:27.121 ****
2025-09-20 09:38:45.433015 | orchestrator | [WARNING]: Skipped
2025-09-20 09:38:45.433026 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-09-20 09:38:45.433044 | orchestrator | to this access issue:
2025-09-20 09:38:45.433055 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-09-20 09:38:45.433066 | orchestrator | directory
2025-09-20 09:38:45.433077 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 09:38:45.433087 | orchestrator |
2025-09-20 09:38:45.433098 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-09-20 09:38:45.433109 | orchestrator | Saturday 20 September 2025 09:36:51 +0000 (0:00:01.111) 0:00:28.232 ****
2025-09-20 09:38:45.433119 | orchestrator | [WARNING]: Skipped
2025-09-20 09:38:45.433130 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-09-20 09:38:45.433141 | orchestrator | to this access issue:
2025-09-20 09:38:45.433152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-09-20 09:38:45.433162 | orchestrator | directory
2025-09-20 09:38:45.433173 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 09:38:45.433183 | orchestrator |
2025-09-20 09:38:45.433194 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-09-20 09:38:45.433205 | orchestrator | Saturday 20 September 2025 09:36:52 +0000 (0:00:00.723) 0:00:28.956 ****
2025-09-20 09:38:45.433215 | orchestrator | [WARNING]: Skipped
2025-09-20 09:38:45.433226 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-09-20 09:38:45.433236 | orchestrator | to this access issue:
2025-09-20 09:38:45.433247 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-09-20 09:38:45.433258 | orchestrator | directory
2025-09-20 09:38:45.433268 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-20 09:38:45.433279 | orchestrator |
2025-09-20 09:38:45.433289 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-09-20 09:38:45.433300 | orchestrator | Saturday 20 September 2025 09:36:53 +0000 (0:00:00.859) 0:00:29.815 ****
2025-09-20 09:38:45.433311 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:38:45.433321 | orchestrator | changed: [testbed-manager]
2025-09-20 09:38:45.433332 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:38:45.433342 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:38:45.433353 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:38:45.433386 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:38:45.433398 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:38:45.433408 | orchestrator |
2025-09-20 09:38:45.433419 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-09-20 09:38:45.433430 | orchestrator | Saturday 20 September 2025 09:36:59 +0000 (0:00:05.863) 0:00:35.678 ****
2025-09-20 09:38:45.433440 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-20 09:38:45.433452 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-20 09:38:45.433462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-20 09:38:45.433473 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-20 09:38:45.433490 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-20 09:38:45.433509 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-20 09:38:45.433519 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-20 09:38:45.433530 | orchestrator |
2025-09-20 09:38:45.433541 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-09-20 09:38:45.433551 | orchestrator | Saturday 20 September 2025 09:37:03 +0000 (0:00:03.039) 0:00:40.022 ****
2025-09-20 09:38:45.433562 | orchestrator | changed: [testbed-manager]
2025-09-20 09:38:45.433573 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:38:45.433583 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:38:45.433594 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:38:45.433604 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:38:45.433615 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:38:45.433625 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:38:45.433636 | orchestrator |
2025-09-20 09:38:45.433646 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-09-20 09:38:45.433657 | orchestrator | Saturday 20 September 2025 09:37:06 +0000 (0:00:03.039) 0:00:43.062 ****
2025-09-20 09:38:45.433668 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.433686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433699 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.433711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433722 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.433745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433757 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.433768 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.433779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433797 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433809 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433822 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433834 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433852 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433864 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433875 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.433892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433911 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.433923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433935 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433953 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.433964 | orchestrator |
2025-09-20 09:38:45.433975 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-09-20 09:38:45.433985 | orchestrator | Saturday 20 September 2025 09:37:09 +0000 (0:00:03.030) 0:00:46.092 ****
2025-09-20 09:38:45.433996 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-20 09:38:45.434007 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-20 09:38:45.434065 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-20 09:38:45.434079 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-20 09:38:45.434095 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-20 09:38:45.434106 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-20 09:38:45.434116 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-20 09:38:45.434127 | orchestrator |
2025-09-20 09:38:45.434138 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-09-20 09:38:45.434149 | orchestrator | Saturday 20 September 2025 09:37:12 +0000 (0:00:02.844) 0:00:48.937 ****
2025-09-20 09:38:45.434159 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-20 09:38:45.434170 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-20 09:38:45.434181 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-20 09:38:45.434191 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-20 09:38:45.434202 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-20 09:38:45.434213 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-20 09:38:45.434223 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-20 09:38:45.434234 | orchestrator |
2025-09-20 09:38:45.434245 | orchestrator | TASK [common : Check common containers] ****************************************
2025-09-20 09:38:45.434255 | orchestrator | Saturday 20 September 2025 09:37:14 +0000 (0:00:01.998) 0:00:50.936 ****
2025-09-20 09:38:45.434266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.434285 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.434305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.434316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.434328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.434344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434355 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.434452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-20 09:38:45.434490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434530 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434542 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434571 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:38:45.434625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name':
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:38:45.434635 | orchestrator | 2025-09-20 09:38:45.434646 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-20 09:38:45.434657 | orchestrator | Saturday 20 September 2025 09:37:17 +0000 (0:00:03.083) 0:00:54.019 **** 2025-09-20 09:38:45.434668 | orchestrator | changed: [testbed-manager] 2025-09-20 09:38:45.434683 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:38:45.434694 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:38:45.434705 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:38:45.434716 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:38:45.434726 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:38:45.434737 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:38:45.434747 | orchestrator | 2025-09-20 09:38:45.434758 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-20 09:38:45.434769 | orchestrator | Saturday 20 September 2025 09:37:18 +0000 (0:00:01.372) 0:00:55.392 **** 2025-09-20 09:38:45.434779 | orchestrator | changed: [testbed-manager] 2025-09-20 09:38:45.434789 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:38:45.434800 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:38:45.434811 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:38:45.434821 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:38:45.434832 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:38:45.434842 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:38:45.434853 | orchestrator | 2025-09-20 09:38:45.434864 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2025-09-20 09:38:45.434874 | orchestrator | Saturday 20 September 2025 09:37:19 +0000 (0:00:01.055) 0:00:56.448 **** 2025-09-20 09:38:45.434885 | orchestrator | 2025-09-20 09:38:45.434895 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 09:38:45.434906 | orchestrator | Saturday 20 September 2025 09:37:20 +0000 (0:00:00.061) 0:00:56.510 **** 2025-09-20 09:38:45.434924 | orchestrator | 2025-09-20 09:38:45.434934 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 09:38:45.434945 | orchestrator | Saturday 20 September 2025 09:37:20 +0000 (0:00:00.059) 0:00:56.569 **** 2025-09-20 09:38:45.434956 | orchestrator | 2025-09-20 09:38:45.434967 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 09:38:45.434977 | orchestrator | Saturday 20 September 2025 09:37:20 +0000 (0:00:00.059) 0:00:56.629 **** 2025-09-20 09:38:45.434988 | orchestrator | 2025-09-20 09:38:45.434998 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 09:38:45.435009 | orchestrator | Saturday 20 September 2025 09:37:20 +0000 (0:00:00.222) 0:00:56.852 **** 2025-09-20 09:38:45.435020 | orchestrator | 2025-09-20 09:38:45.435030 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 09:38:45.435041 | orchestrator | Saturday 20 September 2025 09:37:20 +0000 (0:00:00.104) 0:00:56.956 **** 2025-09-20 09:38:45.435051 | orchestrator | 2025-09-20 09:38:45.435062 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-20 09:38:45.435073 | orchestrator | Saturday 20 September 2025 09:37:20 +0000 (0:00:00.070) 0:00:57.026 **** 2025-09-20 09:38:45.435083 | orchestrator | 2025-09-20 09:38:45.435094 | orchestrator | RUNNING HANDLER 
[common : Restart fluentd container] *************************** 2025-09-20 09:38:45.435111 | orchestrator | Saturday 20 September 2025 09:37:20 +0000 (0:00:00.097) 0:00:57.124 **** 2025-09-20 09:38:45.435122 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:38:45.435133 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:38:45.435143 | orchestrator | changed: [testbed-manager] 2025-09-20 09:38:45.435154 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:38:45.435165 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:38:45.435175 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:38:45.435186 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:38:45.435196 | orchestrator | 2025-09-20 09:38:45.435207 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-20 09:38:45.435218 | orchestrator | Saturday 20 September 2025 09:37:54 +0000 (0:00:34.228) 0:01:31.353 **** 2025-09-20 09:38:45.435229 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:38:45.435239 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:38:45.435250 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:38:45.435261 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:38:45.435271 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:38:45.435281 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:38:45.435292 | orchestrator | changed: [testbed-manager] 2025-09-20 09:38:45.435302 | orchestrator | 2025-09-20 09:38:45.435313 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-20 09:38:45.435324 | orchestrator | Saturday 20 September 2025 09:38:31 +0000 (0:00:36.564) 0:02:07.917 **** 2025-09-20 09:38:45.435334 | orchestrator | ok: [testbed-manager] 2025-09-20 09:38:45.435345 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:38:45.435356 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:38:45.435382 | orchestrator | ok: [testbed-node-3] 
2025-09-20 09:38:45.435393 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:38:45.435403 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:38:45.435414 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:38:45.435424 | orchestrator | 2025-09-20 09:38:45.435435 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-20 09:38:45.435446 | orchestrator | Saturday 20 September 2025 09:38:33 +0000 (0:00:02.211) 0:02:10.129 **** 2025-09-20 09:38:45.435456 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:38:45.435467 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:38:45.435478 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:38:45.435488 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:38:45.435499 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:38:45.435509 | orchestrator | changed: [testbed-manager] 2025-09-20 09:38:45.435520 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:38:45.435530 | orchestrator | 2025-09-20 09:38:45.435548 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:38:45.435560 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 09:38:45.435572 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 09:38:45.435583 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 09:38:45.435598 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 09:38:45.435609 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 09:38:45.435620 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 09:38:45.435631 | orchestrator | 
testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-20 09:38:45.435641 | orchestrator | 2025-09-20 09:38:45.435652 | orchestrator | 2025-09-20 09:38:45.435663 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:38:45.435673 | orchestrator | Saturday 20 September 2025 09:38:43 +0000 (0:00:09.882) 0:02:20.011 **** 2025-09-20 09:38:45.435684 | orchestrator | =============================================================================== 2025-09-20 09:38:45.435695 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.56s 2025-09-20 09:38:45.435705 | orchestrator | common : Restart fluentd container ------------------------------------- 34.23s 2025-09-20 09:38:45.435716 | orchestrator | common : Restart cron container ----------------------------------------- 9.88s 2025-09-20 09:38:45.435727 | orchestrator | common : Copying over config.json files for services -------------------- 6.53s 2025-09-20 09:38:45.435737 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.86s 2025-09-20 09:38:45.435748 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.17s 2025-09-20 09:38:45.435759 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.34s 2025-09-20 09:38:45.435769 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.19s 2025-09-20 09:38:45.435780 | orchestrator | common : Check common containers ---------------------------------------- 3.08s 2025-09-20 09:38:45.435790 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.04s 2025-09-20 09:38:45.435801 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.03s 2025-09-20 09:38:45.435812 | orchestrator | common : Copy rabbitmq-env.conf to kolla 
toolbox ------------------------ 2.84s 2025-09-20 09:38:45.435822 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.43s 2025-09-20 09:38:45.435833 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.21s 2025-09-20 09:38:45.435849 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.00s 2025-09-20 09:38:45.435861 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.93s 2025-09-20 09:38:45.435872 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.64s 2025-09-20 09:38:45.435882 | orchestrator | common : include_tasks -------------------------------------------------- 1.38s 2025-09-20 09:38:45.435893 | orchestrator | common : Creating log volume -------------------------------------------- 1.37s 2025-09-20 09:38:45.435903 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.34s 2025-09-20 09:38:45.435914 | orchestrator | 2025-09-20 09:38:45 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:38:45.435932 | orchestrator | 2025-09-20 09:38:45 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:38:48.460808 | orchestrator | 2025-09-20 09:38:48 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED 2025-09-20 09:38:48.460955 | orchestrator | 2025-09-20 09:38:48 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:38:48.461202 | orchestrator | 2025-09-20 09:38:48 | INFO  | Task 8b144bd3-d54a-4285-b4da-4b2c619b6917 is in state STARTED 2025-09-20 09:38:48.464798 | orchestrator | 2025-09-20 09:38:48 | INFO  | Task 7d4b71e4-6d58-4240-88f5-9344c1265972 is in state STARTED 2025-09-20 09:38:48.465229 | orchestrator | 2025-09-20 09:38:48 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:38:48.465996 | 
orchestrator | 2025-09-20 09:38:48 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:38:48.466070 | orchestrator | 2025-09-20 09:38:48 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:38:51.488685 | orchestrator | 2025-09-20 09:38:51 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED 2025-09-20 09:38:51.488818 | orchestrator | 2025-09-20 09:38:51 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:38:51.489250 | orchestrator | 2025-09-20 09:38:51 | INFO  | Task 8b144bd3-d54a-4285-b4da-4b2c619b6917 is in state STARTED 2025-09-20 09:38:51.489729 | orchestrator | 2025-09-20 09:38:51 | INFO  | Task 7d4b71e4-6d58-4240-88f5-9344c1265972 is in state STARTED 2025-09-20 09:38:51.490605 | orchestrator | 2025-09-20 09:38:51 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:38:51.491073 | orchestrator | 2025-09-20 09:38:51 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:38:51.491096 | orchestrator | 2025-09-20 09:38:51 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:38:54.510612 | orchestrator | 2025-09-20 09:38:54 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED 2025-09-20 09:38:54.510728 | orchestrator | 2025-09-20 09:38:54 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:38:54.511225 | orchestrator | 2025-09-20 09:38:54 | INFO  | Task 8b144bd3-d54a-4285-b4da-4b2c619b6917 is in state STARTED 2025-09-20 09:38:54.512274 | orchestrator | 2025-09-20 09:38:54 | INFO  | Task 7d4b71e4-6d58-4240-88f5-9344c1265972 is in state STARTED 2025-09-20 09:38:54.512883 | orchestrator | 2025-09-20 09:38:54 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:38:54.513547 | orchestrator | 2025-09-20 09:38:54 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:38:54.513568 | 
orchestrator | 2025-09-20 09:38:54 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:38:57.541122 | orchestrator | 2025-09-20 09:38:57 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED 2025-09-20 09:38:57.542098 | orchestrator | 2025-09-20 09:38:57 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:38:57.542897 | orchestrator | 2025-09-20 09:38:57 | INFO  | Task 8b144bd3-d54a-4285-b4da-4b2c619b6917 is in state STARTED 2025-09-20 09:38:57.544022 | orchestrator | 2025-09-20 09:38:57 | INFO  | Task 7d4b71e4-6d58-4240-88f5-9344c1265972 is in state STARTED 2025-09-20 09:38:57.545346 | orchestrator | 2025-09-20 09:38:57 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:38:57.546227 | orchestrator | 2025-09-20 09:38:57 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:38:57.546359 | orchestrator | 2025-09-20 09:38:57 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:39:00.580938 | orchestrator | 2025-09-20 09:39:00 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED 2025-09-20 09:39:00.581163 | orchestrator | 2025-09-20 09:39:00 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:39:00.581913 | orchestrator | 2025-09-20 09:39:00 | INFO  | Task 8b144bd3-d54a-4285-b4da-4b2c619b6917 is in state STARTED 2025-09-20 09:39:00.582707 | orchestrator | 2025-09-20 09:39:00 | INFO  | Task 7d4b71e4-6d58-4240-88f5-9344c1265972 is in state STARTED 2025-09-20 09:39:00.584567 | orchestrator | 2025-09-20 09:39:00 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:39:00.585367 | orchestrator | 2025-09-20 09:39:00 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:39:00.585423 | orchestrator | 2025-09-20 09:39:00 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:39:03.632909 | orchestrator | 2025-09-20 
09:39:03 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED 2025-09-20 09:39:03.633145 | orchestrator | 2025-09-20 09:39:03 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED 2025-09-20 09:39:03.634540 | orchestrator | 2025-09-20 09:39:03 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:39:03.635222 | orchestrator | 2025-09-20 09:39:03 | INFO  | Task 8b144bd3-d54a-4285-b4da-4b2c619b6917 is in state STARTED 2025-09-20 09:39:03.635874 | orchestrator | 2025-09-20 09:39:03 | INFO  | Task 7d4b71e4-6d58-4240-88f5-9344c1265972 is in state SUCCESS 2025-09-20 09:39:03.636519 | orchestrator | 2025-09-20 09:39:03 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:39:03.637298 | orchestrator | 2025-09-20 09:39:03 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:39:03.637324 | orchestrator | 2025-09-20 09:39:03 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:39:06.681688 | orchestrator | 2025-09-20 09:39:06 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED 2025-09-20 09:39:06.681919 | orchestrator | 2025-09-20 09:39:06 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED 2025-09-20 09:39:06.682678 | orchestrator | 2025-09-20 09:39:06 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:39:06.683823 | orchestrator | 2025-09-20 09:39:06 | INFO  | Task 8b144bd3-d54a-4285-b4da-4b2c619b6917 is in state STARTED 2025-09-20 09:39:06.683867 | orchestrator | 2025-09-20 09:39:06 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:39:06.686092 | orchestrator | 2025-09-20 09:39:06 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:39:06.686119 | orchestrator | 2025-09-20 09:39:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:39:09.723706 | orchestrator | 2025-09-20 
09:39:09 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED 2025-09-20 09:39:09.726338 | orchestrator | 2025-09-20 09:39:09 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED 2025-09-20 09:39:09.726374 | orchestrator | 2025-09-20 09:39:09 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED 2025-09-20 09:39:09.726415 | orchestrator | 2025-09-20 09:39:09 | INFO  | Task 8b144bd3-d54a-4285-b4da-4b2c619b6917 is in state SUCCESS 2025-09-20 09:39:09.727260 | orchestrator | 2025-09-20 09:39:09.727294 | orchestrator | 2025-09-20 09:39:09.727306 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:39:09.727345 | orchestrator | 2025-09-20 09:39:09.727357 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 09:39:09.727368 | orchestrator | Saturday 20 September 2025 09:38:51 +0000 (0:00:00.259) 0:00:00.259 **** 2025-09-20 09:39:09.727379 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:39:09.727432 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:39:09.727443 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:39:09.727454 | orchestrator | 2025-09-20 09:39:09.727465 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:39:09.727475 | orchestrator | Saturday 20 September 2025 09:38:51 +0000 (0:00:00.412) 0:00:00.671 **** 2025-09-20 09:39:09.727487 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-20 09:39:09.727498 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-20 09:39:09.727509 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-20 09:39:09.727520 | orchestrator | 2025-09-20 09:39:09.727531 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-20 09:39:09.727541 | orchestrator | 2025-09-20 09:39:09.727552 
| orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-20 09:39:09.727563 | orchestrator | Saturday 20 September 2025 09:38:52 +0000 (0:00:00.383) 0:00:01.054 **** 2025-09-20 09:39:09.727573 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:39:09.727584 | orchestrator | 2025-09-20 09:39:09.727595 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-20 09:39:09.727607 | orchestrator | Saturday 20 September 2025 09:38:52 +0000 (0:00:00.501) 0:00:01.556 **** 2025-09-20 09:39:09.727618 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-20 09:39:09.727629 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-20 09:39:09.727639 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-20 09:39:09.727650 | orchestrator | 2025-09-20 09:39:09.727661 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-20 09:39:09.727671 | orchestrator | Saturday 20 September 2025 09:38:53 +0000 (0:00:00.814) 0:00:02.371 **** 2025-09-20 09:39:09.727682 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-20 09:39:09.727693 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-20 09:39:09.727703 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-20 09:39:09.727714 | orchestrator | 2025-09-20 09:39:09.727725 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-20 09:39:09.727735 | orchestrator | Saturday 20 September 2025 09:38:55 +0000 (0:00:01.970) 0:00:04.342 **** 2025-09-20 09:39:09.727746 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:39:09.727757 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:39:09.727767 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:39:09.727778 | 
orchestrator | 2025-09-20 09:39:09.727789 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-20 09:39:09.727800 | orchestrator | Saturday 20 September 2025 09:38:57 +0000 (0:00:02.232) 0:00:06.574 **** 2025-09-20 09:39:09.727810 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:39:09.727821 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:39:09.727831 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:39:09.727842 | orchestrator | 2025-09-20 09:39:09.727852 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:39:09.727867 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:39:09.727881 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:39:09.727894 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:39:09.727915 | orchestrator | 2025-09-20 09:39:09.727928 | orchestrator | 2025-09-20 09:39:09.727941 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:39:09.727953 | orchestrator | Saturday 20 September 2025 09:39:00 +0000 (0:00:02.556) 0:00:09.130 **** 2025-09-20 09:39:09.727965 | orchestrator | =============================================================================== 2025-09-20 09:39:09.727978 | orchestrator | memcached : Restart memcached container --------------------------------- 2.56s 2025-09-20 09:39:09.727990 | orchestrator | memcached : Check memcached container ----------------------------------- 2.23s 2025-09-20 09:39:09.728002 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.97s 2025-09-20 09:39:09.728014 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.82s 2025-09-20 
09:39:09.728043 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s 2025-09-20 09:39:09.728055 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2025-09-20 09:39:09.728068 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s 2025-09-20 09:39:09.728080 | orchestrator | 2025-09-20 09:39:09.728092 | orchestrator | 2025-09-20 09:39:09.728104 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:39:09.728116 | orchestrator | 2025-09-20 09:39:09.728127 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 09:39:09.728139 | orchestrator | Saturday 20 September 2025 09:38:51 +0000 (0:00:00.285) 0:00:00.285 **** 2025-09-20 09:39:09.728152 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:39:09.728164 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:39:09.728176 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:39:09.728188 | orchestrator | 2025-09-20 09:39:09.728201 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:39:09.728226 | orchestrator | Saturday 20 September 2025 09:38:51 +0000 (0:00:00.467) 0:00:00.752 **** 2025-09-20 09:39:09.728238 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-20 09:39:09.728249 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-20 09:39:09.728260 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-20 09:39:09.728271 | orchestrator | 2025-09-20 09:39:09.728282 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-20 09:39:09.728293 | orchestrator | 2025-09-20 09:39:09.728304 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-20 09:39:09.728314 | orchestrator 
| Saturday 20 September 2025 09:38:52 +0000 (0:00:00.435) 0:00:01.188 ****
2025-09-20 09:39:09.728325 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:39:09.728336 | orchestrator |
2025-09-20 09:39:09.728347 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-20 09:39:09.728358 | orchestrator | Saturday 20 September 2025 09:38:52 +0000 (0:00:00.661) 0:00:01.850 ****
2025-09-20 09:39:09.728371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728502 | orchestrator |
2025-09-20 09:39:09.728514 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-09-20 09:39:09.728693 | orchestrator | Saturday 20 September 2025 09:38:54 +0000 (0:00:01.277) 0:00:03.127 ****
2025-09-20 09:39:09.728706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728798 | orchestrator |
2025-09-20 09:39:09.728809 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-09-20 09:39:09.728820 | orchestrator | Saturday 20 September 2025 09:38:56 +0000 (0:00:02.651) 0:00:05.778 ****
2025-09-20 09:39:09.728831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.728945 | orchestrator |
2025-09-20 09:39:09.728970 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-09-20 09:39:09.729000 | orchestrator | Saturday 20 September 2025 09:38:59 +0000 (0:00:02.880) 0:00:08.658 ****
2025-09-20 09:39:09.729037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.729073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.729129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-20 09:39:09.729156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.729182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.729219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-20 09:39:09.729245 | orchestrator |
2025-09-20 09:39:09.729271 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-20 09:39:09.729296 | orchestrator | Saturday 20 September 2025 09:39:01 +0000 (0:00:02.030) 0:00:10.689 ****
2025-09-20 09:39:09.729320 | orchestrator |
2025-09-20 09:39:09.729345 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-20 09:39:09.729415 | orchestrator | Saturday 20 September 2025 09:39:01 +0000 (0:00:00.114) 0:00:10.803 ****
2025-09-20 09:39:09.729439 | orchestrator |
2025-09-20 09:39:09.729465 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-20 09:39:09.729491 | orchestrator |
Saturday 20 September 2025 09:39:01 +0000 (0:00:00.067) 0:00:10.870 ****
2025-09-20 09:39:09.729515 | orchestrator |
2025-09-20 09:39:09.729539 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-09-20 09:39:09.729562 | orchestrator | Saturday 20 September 2025 09:39:01 +0000 (0:00:00.094) 0:00:10.965 ****
2025-09-20 09:39:09.729587 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:39:09.729630 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:39:09.729654 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:39:09.729678 | orchestrator |
2025-09-20 09:39:09.729702 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-20 09:39:09.729727 | orchestrator | Saturday 20 September 2025 09:39:04 +0000 (0:00:02.982) 0:00:13.948 ****
2025-09-20 09:39:09.729750 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:39:09.729773 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:39:09.729797 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:39:09.729817 | orchestrator |
2025-09-20 09:39:09.729833 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:39:09.729850 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:39:09.729870 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:39:09.729887 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:39:09.729903 | orchestrator |
2025-09-20 09:39:09.729920 | orchestrator |
2025-09-20 09:39:09.729938 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:39:09.729957 | orchestrator | Saturday 20 September 2025 09:39:08 +0000 (0:00:03.336) 0:00:17.284 ****
2025-09-20 09:39:09.729975 | orchestrator | ===============================================================================
2025-09-20 09:39:09.729994 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.34s
2025-09-20 09:39:09.730007 | orchestrator | redis : Restart redis container ----------------------------------------- 2.98s
2025-09-20 09:39:09.730072 | orchestrator | redis : Copying over redis config files --------------------------------- 2.88s
2025-09-20 09:39:09.730087 | orchestrator | redis : Copying over default config.json files -------------------------- 2.65s
2025-09-20 09:39:09.730098 | orchestrator | redis : Check redis containers ------------------------------------------ 2.03s
2025-09-20 09:39:09.730109 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.28s
2025-09-20 09:39:09.730119 | orchestrator | redis : include_tasks --------------------------------------------------- 0.66s
2025-09-20 09:39:09.730130 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2025-09-20 09:39:09.730141 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-09-20 09:39:09.730152 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.28s
2025-09-20 09:39:09.730163 | orchestrator | 2025-09-20 09:39:09 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:09.730174 | orchestrator | 2025-09-20 09:39:09 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:09.730185 | orchestrator | 2025-09-20 09:39:09 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:12.793516 | orchestrator | 2025-09-20 09:39:12 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:12.793892 | orchestrator | 2025-09-20 09:39:12 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:12.795075 | orchestrator | 2025-09-20 09:39:12 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:12.795248 | orchestrator | 2025-09-20 09:39:12 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:12.796375 | orchestrator | 2025-09-20 09:39:12 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:12.796439 | orchestrator | 2025-09-20 09:39:12 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:15.844796 | orchestrator | 2025-09-20 09:39:15 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:15.844935 | orchestrator | 2025-09-20 09:39:15 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:15.844952 | orchestrator | 2025-09-20 09:39:15 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:15.844963 | orchestrator | 2025-09-20 09:39:15 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:15.844974 | orchestrator | 2025-09-20 09:39:15 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:15.844985 | orchestrator | 2025-09-20 09:39:15 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:18.997903 | orchestrator | 2025-09-20 09:39:18 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:18.998000 | orchestrator | 2025-09-20 09:39:18 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:18.998014 | orchestrator | 2025-09-20 09:39:18 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:18.998072 | orchestrator | 2025-09-20 09:39:18 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:18.998084 | orchestrator | 2025-09-20 09:39:18 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:18.998095 | orchestrator | 2025-09-20 09:39:18 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:21.913003 | orchestrator | 2025-09-20 09:39:21 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:21.913126 | orchestrator | 2025-09-20 09:39:21 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:21.914790 | orchestrator | 2025-09-20 09:39:21 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:21.915613 | orchestrator | 2025-09-20 09:39:21 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:21.916646 | orchestrator | 2025-09-20 09:39:21 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:21.916752 | orchestrator | 2025-09-20 09:39:21 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:24.942828 | orchestrator | 2025-09-20 09:39:24 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:24.946559 | orchestrator | 2025-09-20 09:39:24 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:24.950183 | orchestrator | 2025-09-20 09:39:24 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:24.954294 | orchestrator | 2025-09-20 09:39:24 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:24.958647 | orchestrator | 2025-09-20 09:39:24 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:24.958675 | orchestrator | 2025-09-20 09:39:24 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:28.014626 | orchestrator | 2025-09-20 09:39:28 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:28.014728 | orchestrator | 2025-09-20 09:39:28 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:28.017246 | orchestrator | 2025-09-20 09:39:28 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:28.017277 | orchestrator | 2025-09-20 09:39:28 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:28.017289 | orchestrator | 2025-09-20 09:39:28 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:28.017328 | orchestrator | 2025-09-20 09:39:28 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:31.045956 | orchestrator | 2025-09-20 09:39:31 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:31.046238 | orchestrator | 2025-09-20 09:39:31 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:31.047093 | orchestrator | 2025-09-20 09:39:31 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:31.047800 | orchestrator | 2025-09-20 09:39:31 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:31.049160 | orchestrator | 2025-09-20 09:39:31 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:31.049201 | orchestrator | 2025-09-20 09:39:31 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:34.125063 | orchestrator | 2025-09-20 09:39:34 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:34.125164 | orchestrator | 2025-09-20 09:39:34 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:34.125178 | orchestrator | 2025-09-20 09:39:34 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:34.125189 | orchestrator | 2025-09-20 09:39:34 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:34.125199 | orchestrator | 2025-09-20 09:39:34 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:34.125209 | orchestrator | 2025-09-20 09:39:34 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:37.154251 | orchestrator | 2025-09-20 09:39:37 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:37.158932 | orchestrator | 2025-09-20 09:39:37 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:37.171683 | orchestrator | 2025-09-20 09:39:37 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:37.321867 | orchestrator | 2025-09-20 09:39:37 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:37.321944 | orchestrator | 2025-09-20 09:39:37 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:37.321958 | orchestrator | 2025-09-20 09:39:37 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:40.462407 | orchestrator | 2025-09-20 09:39:40 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:40.473643 | orchestrator | 2025-09-20 09:39:40 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:40.473693 | orchestrator | 2025-09-20 09:39:40 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:40.473705 | orchestrator | 2025-09-20 09:39:40 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:40.473716 | orchestrator | 2025-09-20 09:39:40 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:40.473728 | orchestrator | 2025-09-20 09:39:40 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:43.504918 | orchestrator | 2025-09-20 09:39:43 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:43.508488 | orchestrator | 2025-09-20 09:39:43 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:43.508990 | orchestrator | 2025-09-20 09:39:43 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:43.509903 | orchestrator | 2025-09-20 09:39:43 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:43.513611 | orchestrator | 2025-09-20 09:39:43 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:43.513635 | orchestrator | 2025-09-20 09:39:43 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:46.554778 | orchestrator | 2025-09-20 09:39:46 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:46.555589 | orchestrator | 2025-09-20 09:39:46 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:46.556940 | orchestrator | 2025-09-20 09:39:46 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:46.557987 | orchestrator | 2025-09-20 09:39:46 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:46.559088 | orchestrator | 2025-09-20 09:39:46 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:46.559118 | orchestrator | 2025-09-20 09:39:46 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:49.618584 | orchestrator | 2025-09-20 09:39:49 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:49.619359 | orchestrator | 2025-09-20 09:39:49 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:49.620698 | orchestrator | 2025-09-20 09:39:49 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:49.622369 | orchestrator | 2025-09-20 09:39:49 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:49.624235 | orchestrator | 2025-09-20 09:39:49 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:49.624263 | orchestrator | 2025-09-20 09:39:49 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:52.727171 | orchestrator | 2025-09-20 09:39:52 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:52.727583 | orchestrator | 2025-09-20 09:39:52 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:52.729198 | orchestrator | 2025-09-20 09:39:52 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:52.729409 | orchestrator | 2025-09-20 09:39:52 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:52.730237 | orchestrator | 2025-09-20 09:39:52 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:52.733450 | orchestrator | 2025-09-20 09:39:52 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:55.761615 | orchestrator | 2025-09-20 09:39:55 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:55.762253 | orchestrator | 2025-09-20 09:39:55 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:55.763030 | orchestrator | 2025-09-20 09:39:55 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:55.763847 | orchestrator | 2025-09-20 09:39:55 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:55.764585 | orchestrator | 2025-09-20 09:39:55 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:55.764619 | orchestrator | 2025-09-20 09:39:55 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:39:59.180979 | orchestrator | 2025-09-20 09:39:59 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:39:59.181347 | orchestrator | 2025-09-20 09:39:59 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state STARTED
2025-09-20 09:39:59.182117 | orchestrator | 2025-09-20 09:39:59 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:39:59.184120 | orchestrator | 2025-09-20 09:39:59 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:39:59.185555 | orchestrator | 2025-09-20 09:39:59 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:39:59.185650 | orchestrator | 2025-09-20 09:39:59 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:02.486257 | orchestrator | 2025-09-20 09:40:02 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:02.486376 | orchestrator | 2025-09-20 09:40:02 | INFO  | Task dc4bcf47-413c-4a61-9170-2d881b0e4223 is in state SUCCESS
2025-09-20 09:40:02.486391 | orchestrator | 2025-09-20 09:40:02 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:40:02.486403 | orchestrator | 2025-09-20 09:40:02 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:02.486415 | orchestrator | 2025-09-20 09:40:02 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:02.486494 | orchestrator | 2025-09-20 09:40:02 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:02.487251 | orchestrator |
2025-09-20 09:40:02.487338 | orchestrator |
2025-09-20 09:40:02.487352 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:40:02.487364 | orchestrator |
2025-09-20 09:40:02.487376 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 09:40:02.487387 | orchestrator | Saturday 20 September 2025 09:38:50 +0000 (0:00:00.336) 0:00:00.336 ****
2025-09-20 09:40:02.487397 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:40:02.487409 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:40:02.487420 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:40:02.487430 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:02.487490 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:02.487509 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:02.487528 | orchestrator |
2025-09-20 09:40:02.487546 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 09:40:02.487566 | orchestrator | Saturday 20 September 2025 09:38:51 +0000 (0:00:00.718) 0:00:01.055 ****
2025-09-20 09:40:02.487579 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-20 09:40:02.487590 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-20 09:40:02.487601 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-20 09:40:02.487612 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-20 09:40:02.487623 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-20 09:40:02.487634 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-20 09:40:02.487645 | orchestrator |
2025-09-20 09:40:02.487671 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-20 09:40:02.487682 | orchestrator |
2025-09-20 09:40:02.487693 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-20 09:40:02.487704 | orchestrator | Saturday 20 September 2025 09:38:52 +0000 (0:00:01.040) 0:00:02.095 ****
2025-09-20 09:40:02.487716 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:40:02.487728 | orchestrator |
2025-09-20 09:40:02.487739 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-20 09:40:02.487750 | orchestrator | Saturday 20 September 2025 09:38:54 +0000 (0:00:01.593) 0:00:03.688 ****
2025-09-20 09:40:02.487784 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-20 09:40:02.487796 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-20 09:40:02.487807 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-20 09:40:02.487818 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-20 09:40:02.487829 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-20 09:40:02.487840 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-20 09:40:02.487850 | orchestrator |
2025-09-20 09:40:02.487861 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-20 09:40:02.487872 | orchestrator | Saturday 20 September 2025 09:38:55 +0000 (0:00:01.381) 0:00:05.070 ****
2025-09-20 09:40:02.487883 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-20 09:40:02.487894 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-20 09:40:02.487905 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-20 09:40:02.487916 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-20 09:40:02.487926 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-20 09:40:02.487937 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-20 09:40:02.487948 | orchestrator |
2025-09-20 09:40:02.487959 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-20 09:40:02.487970 | orchestrator | Saturday 20 September 2025 09:38:57 +0000 (0:00:02.541) 0:00:07.611 ****
2025-09-20 09:40:02.487981 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-20 09:40:02.487992 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:02.488003 | orchestrator | skipping: [testbed-node-4]
=> (item=openvswitch)  2025-09-20 09:40:02.488014 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:40:02.488025 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-20 09:40:02.488035 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:40:02.488046 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-20 09:40:02.488057 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:02.488068 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-20 09:40:02.488079 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:40:02.488089 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-20 09:40:02.488100 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:40:02.488111 | orchestrator | 2025-09-20 09:40:02.488122 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-20 09:40:02.488133 | orchestrator | Saturday 20 September 2025 09:38:59 +0000 (0:00:01.465) 0:00:09.077 **** 2025-09-20 09:40:02.488144 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:40:02.488155 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:40:02.488166 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:40:02.488177 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:02.488187 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:40:02.488198 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:40:02.488209 | orchestrator | 2025-09-20 09:40:02.488220 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-20 09:40:02.488231 | orchestrator | Saturday 20 September 2025 09:39:00 +0000 (0:00:01.101) 0:00:10.179 **** 2025-09-20 09:40:02.488265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488293 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2025-09-20 09:40:02.488318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488457 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488470 | orchestrator | 2025-09-20 09:40:02.488482 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-20 09:40:02.488503 | orchestrator | Saturday 20 September 2025 09:39:02 +0000 (0:00:02.021) 0:00:12.201 **** 2025-09-20 09:40:02.488514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488526 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488568 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488693 | orchestrator | 2025-09-20 09:40:02.488704 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] 
**************************** 2025-09-20 09:40:02.488716 | orchestrator | Saturday 20 September 2025 09:39:05 +0000 (0:00:02.964) 0:00:15.166 **** 2025-09-20 09:40:02.488727 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:40:02.488738 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:40:02.488749 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:40:02.488760 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:02.488770 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:40:02.488781 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:40:02.488792 | orchestrator | 2025-09-20 09:40:02.488803 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-20 09:40:02.488814 | orchestrator | Saturday 20 September 2025 09:39:07 +0000 (0:00:01.470) 0:00:16.637 **** 2025-09-20 09:40:02.488829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.488997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.489016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.489033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-20 09:40:02.489045 | orchestrator | 2025-09-20 09:40:02.489056 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-20 09:40:02.489067 | orchestrator | Saturday 20 September 2025 09:39:09 +0000 
(0:00:02.444) 0:00:19.081 ****
2025-09-20 09:40:02.489078 | orchestrator |
2025-09-20 09:40:02.489089 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-20 09:40:02.489100 | orchestrator | Saturday 20 September 2025 09:39:09 +0000 (0:00:00.404) 0:00:19.486 ****
2025-09-20 09:40:02.489111 | orchestrator |
2025-09-20 09:40:02.489122 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-20 09:40:02.489132 | orchestrator | Saturday 20 September 2025 09:39:09 +0000 (0:00:00.103) 0:00:19.589 ****
2025-09-20 09:40:02.489143 | orchestrator |
2025-09-20 09:40:02.489154 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-20 09:40:02.489165 | orchestrator | Saturday 20 September 2025 09:39:10 +0000 (0:00:00.247) 0:00:19.836 ****
2025-09-20 09:40:02.489176 | orchestrator |
2025-09-20 09:40:02.489187 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-20 09:40:02.489197 | orchestrator | Saturday 20 September 2025 09:39:10 +0000 (0:00:00.222) 0:00:20.059 ****
2025-09-20 09:40:02.489208 | orchestrator |
2025-09-20 09:40:02.489219 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-20 09:40:02.489230 | orchestrator | Saturday 20 September 2025 09:39:10 +0000 (0:00:00.321) 0:00:20.380 ****
2025-09-20 09:40:02.489241 | orchestrator |
2025-09-20 09:40:02.489252 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-20 09:40:02.489262 | orchestrator | Saturday 20 September 2025 09:39:10 +0000 (0:00:00.208) 0:00:20.589 ****
2025-09-20 09:40:02.489273 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:40:02.489284 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:02.489302 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:02.489313 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:40:02.489324 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:02.489335 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:40:02.489346 | orchestrator |
2025-09-20 09:40:02.489357 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-20 09:40:02.489368 | orchestrator | Saturday 20 September 2025 09:39:21 +0000 (0:00:11.003) 0:00:31.592 ****
2025-09-20 09:40:02.489378 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:40:02.489389 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:40:02.489400 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:40:02.489411 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:02.489422 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:02.489460 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:02.489472 | orchestrator |
2025-09-20 09:40:02.489483 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-20 09:40:02.489494 | orchestrator | Saturday 20 September 2025 09:39:23 +0000 (0:00:01.415) 0:00:33.008 ****
2025-09-20 09:40:02.489505 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:40:02.489516 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:40:02.489527 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:02.489538 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:40:02.489549 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:02.489559 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:02.489570 | orchestrator |
2025-09-20 09:40:02.489581 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-20 09:40:02.489592 | orchestrator | Saturday 20 September 2025 09:39:35 +0000 (0:00:11.827) 0:00:44.836 ****
2025-09-20 09:40:02.489603 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-20 09:40:02.489614 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-20 09:40:02.489625 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-20 09:40:02.489636 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-20 09:40:02.489647 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-20 09:40:02.489664 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-20 09:40:02.489675 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-09-20 09:40:02.489687 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-20 09:40:02.489698 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-09-20 09:40:02.489709 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-20 09:40:02.489719 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-20 09:40:02.489730 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-09-20 09:40:02.489741 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-20 09:40:02.489752 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-20 09:40:02.489763 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-20 09:40:02.489778 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-20 09:40:02.489799 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-20 09:40:02.489810 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-20 09:40:02.489821 | orchestrator |
2025-09-20 09:40:02.489832 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-09-20 09:40:02.489843 | orchestrator | Saturday 20 September 2025 09:39:43 +0000 (0:00:08.300) 0:00:53.136 ****
2025-09-20 09:40:02.489855 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-09-20 09:40:02.489865 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-09-20 09:40:02.489876 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:02.489887 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-09-20 09:40:02.489898 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:02.489909 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:02.489920 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-09-20 09:40:02.489931 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-09-20 09:40:02.489942 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-09-20 09:40:02.489952 | orchestrator |
2025-09-20 09:40:02.489963 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-09-20 09:40:02.489974 | orchestrator | Saturday 20 September 2025 09:39:47 +0000 (0:00:03.534) 0:00:56.671 ****
2025-09-20 09:40:02.489985 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-09-20 09:40:02.489996 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:02.490007 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-09-20 09:40:02.490063 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-09-20 09:40:02.490077 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:02.490088 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:02.490099 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-09-20 09:40:02.490110 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-09-20 09:40:02.490121 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-09-20 09:40:02.490132 | orchestrator |
2025-09-20 09:40:02.490143 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-20 09:40:02.490154 | orchestrator | Saturday 20 September 2025 09:39:51 +0000 (0:00:04.267) 0:01:00.938 ****
2025-09-20 09:40:02.490164 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:40:02.490175 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:40:02.490186 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:02.490197 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:40:02.490208 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:02.490218 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:02.490229 | orchestrator |
2025-09-20 09:40:02.490240 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:40:02.490251 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-20 09:40:02.490263 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-20 09:40:02.490274 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-20 09:40:02.490285 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 09:40:02.490296 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 09:40:02.490320 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 09:40:02.490332 | orchestrator |
2025-09-20 09:40:02.490344 | orchestrator |
2025-09-20 09:40:02.490355 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:40:02.490366 | orchestrator | Saturday 20 September 2025 09:40:00 +0000 (0:00:09.084) 0:01:10.023 ****
2025-09-20 09:40:02.490376 | orchestrator | ===============================================================================
2025-09-20 09:40:02.490387 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.91s
2025-09-20 09:40:02.490398 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.00s
2025-09-20 09:40:02.490409 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.30s
2025-09-20 09:40:02.490419 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.27s
2025-09-20 09:40:02.490430 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.53s
2025-09-20 09:40:02.490493 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.96s
2025-09-20 09:40:02.490505 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.54s
2025-09-20 09:40:02.490515 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.44s
2025-09-20 09:40:02.490526 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.02s
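The PLAY RECAP lines above follow Ansible's fixed `host : key=value ...` layout, which makes them easy to check mechanically. A minimal sketch (`parse_recap` is a hypothetical helper, not part of the job) that turns one recap line into counters:

```python
import re

def parse_recap(line: str) -> tuple[str, dict]:
    """Parse an Ansible PLAY RECAP line such as
    'testbed-node-0 : ok=15 changed=11 unreachable=0 failed=0 ...'
    into (hostname, {counter: value})."""
    host, _, rest = line.partition(":")
    counters = {key: int(value) for key, value in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

host, stats = parse_recap(
    "testbed-node-0 : ok=15 changed=11 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0"
)
# A play is typically considered clean when failed and unreachable are both 0.
```

The control nodes (0-2) report `ok=15 changed=11`, while the compute nodes (3-5) report `ok=13 changed=9` with two extra skips, matching the `br-ex`/`vxlan0` tasks that only ran on the first three hosts.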
2025-09-20 09:40:02.490542 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.59s
2025-09-20 09:40:02.490554 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.51s
2025-09-20 09:40:02.490564 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.47s
2025-09-20 09:40:02.490575 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.47s
2025-09-20 09:40:02.490586 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.42s
2025-09-20 09:40:02.490596 | orchestrator | module-load : Load modules ---------------------------------------------- 1.38s
2025-09-20 09:40:02.490607 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.10s
2025-09-20 09:40:02.490618 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s
2025-09-20 09:40:02.490629 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s
2025-09-20 09:40:05.531747 | orchestrator | 2025-09-20 09:40:05 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:05.532064 | orchestrator | 2025-09-20 09:40:05 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:40:05.532611 | orchestrator | 2025-09-20 09:40:05 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:05.533148 | orchestrator | 2025-09-20 09:40:05 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:05.543333 | orchestrator | 2025-09-20 09:40:05 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:05.543365 | orchestrator | 2025-09-20 09:40:05 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:08.676670 | orchestrator | 2025-09-20 09:40:08 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:08.676745 | orchestrator | 2025-09-20 09:40:08 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state STARTED
2025-09-20 09:40:08.676758 | orchestrator | 2025-09-20 09:40:08 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:08.678581 | orchestrator | 2025-09-20 09:40:08 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:08.679017 | orchestrator | 2025-09-20 09:40:08 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:08.679064 | orchestrator | 2025-09-20 09:40:08 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:11.715028 | orchestrator | 2025-09-20 09:40:11 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:11.715783 | orchestrator | 2025-09-20 09:40:11 | INFO  | Task bf5803ff-e773-4d69-b1a6-1dd0a652a4d4 is in state SUCCESS
2025-09-20 09:40:11.720568 | orchestrator |
2025-09-20 09:40:11.720611 | orchestrator |
2025-09-20 09:40:11.720623 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-09-20 09:40:11.720635 | orchestrator |
2025-09-20 09:40:11.720646 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-09-20 09:40:11.720657 | orchestrator | Saturday 20 September 2025 09:36:24 +0000 (0:00:00.235) 0:00:00.235 ****
2025-09-20 09:40:11.720668 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:40:11.720680 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:40:11.720691 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:40:11.720701 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.720712 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.720723 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.720733 | orchestrator |
2025-09-20 09:40:11.720744 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-09-20 09:40:11.720755 | orchestrator | Saturday 20 September 2025 09:36:24 +0000 (0:00:00.665) 0:00:00.901 ****
2025-09-20 09:40:11.720785 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.720798 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.720809 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.720820 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.720830 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.720841 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.720852 | orchestrator |
2025-09-20 09:40:11.720863 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-09-20 09:40:11.720874 | orchestrator | Saturday 20 September 2025 09:36:25 +0000 (0:00:00.598) 0:00:01.499 ****
2025-09-20 09:40:11.720885 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.720895 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.720906 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.720917 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.720928 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.720938 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.720949 | orchestrator |
2025-09-20 09:40:11.720960 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-09-20 09:40:11.720971 | orchestrator | Saturday 20 September 2025 09:36:26 +0000 (0:00:00.718) 0:00:02.218 ****
2025-09-20 09:40:11.720981 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:40:11.720992 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:40:11.721003 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:11.721013 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:40:11.721024 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.721035 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:11.721046 | orchestrator |
2025-09-20 09:40:11.721056 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-09-20 09:40:11.721067 | orchestrator | Saturday 20 September 2025 09:36:28 +0000 (0:00:01.957) 0:00:04.176 ****
2025-09-20 09:40:11.721085 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:40:11.721096 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:40:11.721106 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:40:11.721117 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.721128 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:11.721139 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:11.721150 | orchestrator |
2025-09-20 09:40:11.721161 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-09-20 09:40:11.721172 | orchestrator | Saturday 20 September 2025 09:36:28 +0000 (0:00:00.885) 0:00:05.061 ****
2025-09-20 09:40:11.721203 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:40:11.721216 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:40:11.721228 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:40:11.721241 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.721254 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:11.721265 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:11.721278 | orchestrator |
2025-09-20 09:40:11.721290 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-09-20 09:40:11.721303 | orchestrator | Saturday 20 September 2025 09:36:30 +0000 (0:00:01.153) 0:00:06.214 ****
2025-09-20 09:40:11.721316 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.721329 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.721341 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.721353 | orchestrator | skipping: [testbed-node-0]
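The forwarding tasks above apply kernel settings on every node before k3s comes up. A minimal sketch of what such a prereq step amounts to; the exact sysctl keys and values here are assumptions based on common k3s prerequisite roles, not read from the role itself:

```python
# Hypothetical reconstruction of the sysctl settings behind the
# "Enable IPv4 forwarding" / "Enable IPv6 forwarding" /
# "Enable IPv6 router advertisements" tasks (values are assumptions).
SYSCTL_SETTINGS = {
    "net.ipv4.ip_forward": 1,           # IPv4 forwarding
    "net.ipv6.conf.all.forwarding": 1,  # IPv6 forwarding
    "net.ipv6.conf.all.accept_ra": 2,   # accept router advertisements even when forwarding
}

def render_sysctl_conf(settings: dict) -> str:
    """Render settings in the /etc/sysctl.d/*.conf key = value format."""
    return "".join(f"{key} = {value}\n" for key, value in settings.items())

print(render_sysctl_conf(SYSCTL_SETTINGS))
```

On a live host the equivalent would be written to a drop-in file and activated with `sysctl --system`; Ansible's `ansible.posix.sysctl` module does both in one step.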
2025-09-20 09:40:11.721365 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.721377 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.721390 | orchestrator |
2025-09-20 09:40:11.721404 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-09-20 09:40:11.721416 | orchestrator | Saturday 20 September 2025 09:36:30 +0000 (0:00:00.553) 0:00:06.768 ****
2025-09-20 09:40:11.721428 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.721461 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.721483 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.721504 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.721523 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.721537 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.721549 | orchestrator |
2025-09-20 09:40:11.721560 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-09-20 09:40:11.721571 | orchestrator | Saturday 20 September 2025 09:36:31 +0000 (0:00:00.691) 0:00:07.459 ****
2025-09-20 09:40:11.721582 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 09:40:11.721594 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 09:40:11.721604 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.721615 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 09:40:11.721626 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 09:40:11.721637 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.721651 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 09:40:11.721670 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 09:40:11.721689 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.721701 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 09:40:11.721723 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 09:40:11.721734 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.721745 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 09:40:11.721756 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 09:40:11.721766 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.721777 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 09:40:11.721788 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 09:40:11.721798 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.721809 | orchestrator |
2025-09-20 09:40:11.721819 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-09-20 09:40:11.721830 | orchestrator | Saturday 20 September 2025 09:36:32 +0000 (0:00:00.823) 0:00:08.283 ****
2025-09-20 09:40:11.721840 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.721859 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.721870 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.721881 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.721891 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.721902 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.721912 | orchestrator |
2025-09-20 09:40:11.721923 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-09-20 09:40:11.721935 | orchestrator | Saturday 20 September 2025 09:36:34 +0000 (0:00:01.957) 0:00:10.241 ****
2025-09-20 09:40:11.721945 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:40:11.721956 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:40:11.721966 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:40:11.721977 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.721988 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.721998 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.722009 | orchestrator |
2025-09-20 09:40:11.722090 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-09-20 09:40:11.722101 | orchestrator | Saturday 20 September 2025 09:36:35 +0000 (0:00:00.920) 0:00:11.161 ****
2025-09-20 09:40:11.722112 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:40:11.722123 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:40:11.722134 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.722144 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:11.722155 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:11.722165 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:40:11.722176 | orchestrator |
2025-09-20 09:40:11.722187 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-09-20 09:40:11.722204 | orchestrator | Saturday 20 September 2025 09:36:40 +0000 (0:00:05.142) 0:00:16.303 ****
2025-09-20 09:40:11.722215 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.722225 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.722236 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.722247 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.722257 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.722268 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.722278 | orchestrator |
2025-09-20 09:40:11.722289 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
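The k3s_download role carries one download task per architecture (x64, arm64, armhf); on these x86_64 nodes only the x64 task runs and the other two are skipped. A sketch of that per-architecture selection; the mapping of `ansible_architecture` values to k3s release asset names is an assumption, not taken from the role:

```python
def k3s_binary_name(ansible_architecture: str) -> str:
    """Pick the k3s release asset for a host architecture (hypothetical
    helper mirroring the x64 / arm64 / armhf download tasks)."""
    mapping = {
        "x86_64": "k3s",         # "Download k3s binary x64" ran above
        "aarch64": "k3s-arm64",  # skipped in this job
        "armv7l": "k3s-armhf",   # skipped in this job
    }
    try:
        return mapping[ansible_architecture]
    except KeyError:
        raise ValueError(f"unsupported architecture: {ansible_architecture}")
```

In the role itself the same effect is achieved with `when:` conditions on the host's reported architecture rather than an explicit lookup table.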
2025-09-20 09:40:11.722300 | orchestrator | Saturday 20 September 2025 09:36:42 +0000 (0:00:01.875) 0:00:18.179 ****
2025-09-20 09:40:11.722311 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.722321 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.722332 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.722342 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.722353 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.722363 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.722374 | orchestrator |
2025-09-20 09:40:11.722385 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-09-20 09:40:11.722397 | orchestrator | Saturday 20 September 2025 09:36:44 +0000 (0:00:02.155) 0:00:20.335 ****
2025-09-20 09:40:11.722408 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:40:11.722418 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:40:11.722429 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:40:11.722439 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.722508 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.722520 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.722530 | orchestrator |
2025-09-20 09:40:11.722541 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-09-20 09:40:11.722552 | orchestrator | Saturday 20 September 2025 09:36:45 +0000 (0:00:01.775) 0:00:22.111 ****
2025-09-20 09:40:11.722562 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-09-20 09:40:11.722573 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-09-20 09:40:11.722584 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-09-20 09:40:11.722602 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-09-20 09:40:11.722613 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-09-20 09:40:11.722623 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-09-20 09:40:11.722634 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-09-20 09:40:11.722644 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-09-20 09:40:11.722654 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-09-20 09:40:11.722665 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-09-20 09:40:11.722675 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-09-20 09:40:11.722686 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-09-20 09:40:11.722697 | orchestrator |
2025-09-20 09:40:11.722707 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-09-20 09:40:11.722718 | orchestrator | Saturday 20 September 2025 09:36:48 +0000 (0:00:02.476) 0:00:24.587 ****
2025-09-20 09:40:11.722729 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:40:11.722739 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:40:11.722749 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:40:11.722760 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:11.722770 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.722781 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:11.722792 | orchestrator |
2025-09-20 09:40:11.722810 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-09-20 09:40:11.722822 | orchestrator |
2025-09-20 09:40:11.722833 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-09-20 09:40:11.722844 | orchestrator | Saturday 20 September 2025 09:36:49 +0000 (0:00:01.384) 0:00:25.972 ****
2025-09-20 09:40:11.722854 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.722865 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.722876 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.722886 | orchestrator |
2025-09-20 09:40:11.722897 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-09-20 09:40:11.722908 | orchestrator | Saturday 20 September 2025 09:36:50 +0000 (0:00:01.091) 0:00:27.063 ****
2025-09-20 09:40:11.722918 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.722929 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.722940 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.722950 | orchestrator |
2025-09-20 09:40:11.722961 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-09-20 09:40:11.722972 | orchestrator | Saturday 20 September 2025 09:36:52 +0000 (0:00:01.118) 0:00:28.182 ****
2025-09-20 09:40:11.722982 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.722993 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.723003 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.723014 | orchestrator |
2025-09-20 09:40:11.723024 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-09-20 09:40:11.723035 | orchestrator | Saturday 20 September 2025 09:36:53 +0000 (0:00:01.051) 0:00:29.233 ****
2025-09-20 09:40:11.723046 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.723056 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.723067 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.723078 | orchestrator |
2025-09-20 09:40:11.723088 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-09-20 09:40:11.723099 | orchestrator | Saturday 20 September 2025 09:36:54 +0000 (0:00:01.176) 0:00:30.410 ****
2025-09-20 09:40:11.723110 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.723121 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.723131 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.723142 | orchestrator |
2025-09-20 09:40:11.723153 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-09-20 09:40:11.723163 | orchestrator | Saturday 20 September 2025 09:36:54 +0000 (0:00:00.333) 0:00:30.743 ****
2025-09-20 09:40:11.723174 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.723184 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.723201 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.723211 | orchestrator |
2025-09-20 09:40:11.723222 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-09-20 09:40:11.723238 | orchestrator | Saturday 20 September 2025 09:36:55 +0000 (0:00:00.955) 0:00:31.699 ****
2025-09-20 09:40:11.723249 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:11.723260 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.723271 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:11.723281 | orchestrator |
2025-09-20 09:40:11.723292 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-09-20 09:40:11.723303 | orchestrator | Saturday 20 September 2025 09:36:57 +0000 (0:00:02.395) 0:00:34.094 ****
2025-09-20 09:40:11.723314 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:40:11.723324 | orchestrator |
2025-09-20 09:40:11.723335 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-09-20 09:40:11.723346 | orchestrator | Saturday 20 September 2025 09:36:58 +0000 (0:00:00.722) 0:00:34.817 ****
2025-09-20 09:40:11.723356 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.723367 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.723377 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.723388 | orchestrator |
2025-09-20 09:40:11.723399 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-09-20 09:40:11.723410 | orchestrator | Saturday 20 September 2025 09:37:01 +0000 (0:00:02.673) 0:00:37.491 ****
2025-09-20 09:40:11.723420 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.723431 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.723461 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.723477 | orchestrator |
2025-09-20 09:40:11.723488 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-09-20 09:40:11.723499 | orchestrator | Saturday 20 September 2025 09:37:02 +0000 (0:00:00.708) 0:00:38.200 ****
2025-09-20 09:40:11.723509 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.723520 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.723530 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.723541 | orchestrator |
2025-09-20 09:40:11.723552 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-09-20 09:40:11.723702 | orchestrator | Saturday 20 September 2025 09:37:03 +0000 (0:00:01.197) 0:00:39.397 ****
2025-09-20 09:40:11.723715 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.723726 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.723737 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.723748 | orchestrator |
2025-09-20 09:40:11.723759 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-09-20 09:40:11.723770 | orchestrator | Saturday 20 September 2025 09:37:05 +0000 (0:00:01.808) 0:00:41.206 ****
2025-09-20 09:40:11.723781 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.723792 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.723803 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.723814 | orchestrator |
2025-09-20 09:40:11.723824 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-09-20 09:40:11.723835 | orchestrator | Saturday 20 September 2025 09:37:05 +0000 (0:00:00.419) 0:00:41.625 ****
2025-09-20 09:40:11.723846 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.723857 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.723868 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.723879 | orchestrator |
2025-09-20 09:40:11.723890 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-09-20 09:40:11.723901 | orchestrator | Saturday 20 September 2025 09:37:05 +0000 (0:00:00.362) 0:00:41.988 ****
2025-09-20 09:40:11.723912 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:40:11.723922 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:40:11.723933 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:40:11.723944 | orchestrator |
2025-09-20 09:40:11.723973 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-09-20 09:40:11.723985 | orchestrator | Saturday 20 September 2025 09:37:08 +0000 (0:00:02.156) 0:00:44.145 ****
2025-09-20 09:40:11.723997 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-20 09:40:11.724008 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-20 09:40:11.724019 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-20 09:40:11.724031 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-20 09:40:11.724042 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-20 09:40:11.724053 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-20 09:40:11.724064 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-20 09:40:11.724075 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-20 09:40:11.724086 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-20 09:40:11.724097 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-20 09:40:11.724108 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-20 09:40:11.724124 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-20 09:40:11.724136 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-20 09:40:11.724147 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-20 09:40:11.724158 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-09-20 09:40:11.724169 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:40:11.724180 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:40:11.724191 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:40:11.724202 | orchestrator | 2025-09-20 09:40:11.724213 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-20 09:40:11.724224 | orchestrator | Saturday 20 September 2025 09:38:03 +0000 (0:00:55.229) 0:01:39.375 **** 2025-09-20 09:40:11.724234 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.724245 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:40:11.724256 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:40:11.724267 | orchestrator | 2025-09-20 09:40:11.724278 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-20 09:40:11.724289 | orchestrator | Saturday 20 September 2025 09:38:03 +0000 (0:00:00.634) 0:01:40.010 **** 2025-09-20 09:40:11.724300 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:40:11.724311 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:40:11.724322 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:40:11.724334 | orchestrator | 2025-09-20 09:40:11.724348 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-20 09:40:11.724360 | orchestrator | Saturday 20 September 2025 09:38:04 +0000 (0:00:01.120) 0:01:41.130 **** 2025-09-20 09:40:11.724378 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:40:11.724391 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:40:11.724403 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:40:11.724415 | orchestrator | 2025-09-20 09:40:11.724427 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-20 09:40:11.724439 | orchestrator | Saturday 20 September 2025 09:38:06 +0000 (0:00:01.390) 0:01:42.521 **** 2025-09-20 09:40:11.724474 
| orchestrator | changed: [testbed-node-1] 2025-09-20 09:40:11.724486 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:40:11.724498 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:40:11.724510 | orchestrator | 2025-09-20 09:40:11.724523 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-20 09:40:11.724535 | orchestrator | Saturday 20 September 2025 09:38:32 +0000 (0:00:25.908) 0:02:08.429 **** 2025-09-20 09:40:11.724548 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:40:11.724560 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:40:11.724572 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:40:11.724584 | orchestrator | 2025-09-20 09:40:11.724596 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-20 09:40:11.724608 | orchestrator | Saturday 20 September 2025 09:38:33 +0000 (0:00:00.755) 0:02:09.184 **** 2025-09-20 09:40:11.724622 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:40:11.724634 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:40:11.724646 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:40:11.724658 | orchestrator | 2025-09-20 09:40:11.724676 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-20 09:40:11.724687 | orchestrator | Saturday 20 September 2025 09:38:33 +0000 (0:00:00.585) 0:02:09.770 **** 2025-09-20 09:40:11.724698 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:40:11.724709 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:40:11.724720 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:40:11.724730 | orchestrator | 2025-09-20 09:40:11.724741 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-20 09:40:11.724752 | orchestrator | Saturday 20 September 2025 09:38:34 +0000 (0:00:00.583) 0:02:10.354 **** 2025-09-20 09:40:11.724763 | orchestrator | ok: [testbed-node-0] 
2025-09-20 09:40:11.724774 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:40:11.724784 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:40:11.724795 | orchestrator | 2025-09-20 09:40:11.724806 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-20 09:40:11.724817 | orchestrator | Saturday 20 September 2025 09:38:34 +0000 (0:00:00.759) 0:02:11.113 **** 2025-09-20 09:40:11.724827 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:40:11.724838 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:40:11.724849 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:40:11.724859 | orchestrator | 2025-09-20 09:40:11.724870 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-20 09:40:11.724881 | orchestrator | Saturday 20 September 2025 09:38:35 +0000 (0:00:00.274) 0:02:11.388 **** 2025-09-20 09:40:11.724891 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:40:11.724902 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:40:11.724913 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:40:11.724923 | orchestrator | 2025-09-20 09:40:11.724934 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-20 09:40:11.724945 | orchestrator | Saturday 20 September 2025 09:38:35 +0000 (0:00:00.635) 0:02:12.023 **** 2025-09-20 09:40:11.724956 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:40:11.724966 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:40:11.724977 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:40:11.724988 | orchestrator | 2025-09-20 09:40:11.724999 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-20 09:40:11.725009 | orchestrator | Saturday 20 September 2025 09:38:36 +0000 (0:00:00.611) 0:02:12.635 **** 2025-09-20 09:40:11.725020 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:40:11.725042 | 
orchestrator | changed: [testbed-node-1] 2025-09-20 09:40:11.725053 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:40:11.725064 | orchestrator | 2025-09-20 09:40:11.725074 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-20 09:40:11.725085 | orchestrator | Saturday 20 September 2025 09:38:37 +0000 (0:00:01.141) 0:02:13.776 **** 2025-09-20 09:40:11.725096 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:40:11.725107 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:40:11.725117 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:40:11.725128 | orchestrator | 2025-09-20 09:40:11.725139 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-20 09:40:11.725150 | orchestrator | Saturday 20 September 2025 09:38:38 +0000 (0:00:00.800) 0:02:14.576 **** 2025-09-20 09:40:11.725161 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.725172 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:40:11.725182 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:40:11.725193 | orchestrator | 2025-09-20 09:40:11.725204 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-20 09:40:11.725214 | orchestrator | Saturday 20 September 2025 09:38:38 +0000 (0:00:00.296) 0:02:14.873 **** 2025-09-20 09:40:11.725225 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.725236 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:40:11.725247 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:40:11.725257 | orchestrator | 2025-09-20 09:40:11.725268 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-20 09:40:11.725279 | orchestrator | Saturday 20 September 2025 09:38:39 +0000 (0:00:00.283) 0:02:15.157 **** 2025-09-20 09:40:11.725290 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:40:11.725300 | orchestrator | 
ok: [testbed-node-0] 2025-09-20 09:40:11.725311 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:40:11.725322 | orchestrator | 2025-09-20 09:40:11.725333 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-20 09:40:11.725344 | orchestrator | Saturday 20 September 2025 09:38:39 +0000 (0:00:00.909) 0:02:16.067 **** 2025-09-20 09:40:11.725354 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:40:11.725365 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:40:11.725376 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:40:11.725387 | orchestrator | 2025-09-20 09:40:11.725398 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-20 09:40:11.725409 | orchestrator | Saturday 20 September 2025 09:38:40 +0000 (0:00:00.682) 0:02:16.749 **** 2025-09-20 09:40:11.725420 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-20 09:40:11.725431 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-20 09:40:11.725458 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-20 09:40:11.725470 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-20 09:40:11.725481 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-20 09:40:11.725491 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-20 09:40:11.725502 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-20 09:40:11.726139 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-20 
09:40:11.726162 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-20 09:40:11.726181 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-20 09:40:11.726193 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-20 09:40:11.726212 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-20 09:40:11.726223 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-20 09:40:11.726234 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-20 09:40:11.726245 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-20 09:40:11.726256 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-20 09:40:11.726266 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-20 09:40:11.726277 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-20 09:40:11.726288 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-20 09:40:11.726298 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-20 09:40:11.726309 | orchestrator | 2025-09-20 09:40:11.726320 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-20 09:40:11.726331 | orchestrator | 2025-09-20 09:40:11.726341 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-20 09:40:11.726352 | orchestrator | Saturday 20 September 2025 09:38:43 +0000 (0:00:03.212) 
0:02:19.962 **** 2025-09-20 09:40:11.726363 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:40:11.726374 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:40:11.726385 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:40:11.726396 | orchestrator | 2025-09-20 09:40:11.726406 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-20 09:40:11.726417 | orchestrator | Saturday 20 September 2025 09:38:44 +0000 (0:00:00.538) 0:02:20.501 **** 2025-09-20 09:40:11.726428 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:40:11.726438 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:40:11.726479 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:40:11.726490 | orchestrator | 2025-09-20 09:40:11.726501 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-20 09:40:11.726512 | orchestrator | Saturday 20 September 2025 09:38:45 +0000 (0:00:01.615) 0:02:22.116 **** 2025-09-20 09:40:11.726522 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:40:11.726533 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:40:11.726544 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:40:11.726554 | orchestrator | 2025-09-20 09:40:11.726565 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-20 09:40:11.726576 | orchestrator | Saturday 20 September 2025 09:38:46 +0000 (0:00:00.492) 0:02:22.609 **** 2025-09-20 09:40:11.726587 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:40:11.726598 | orchestrator | 2025-09-20 09:40:11.726609 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-20 09:40:11.726619 | orchestrator | Saturday 20 September 2025 09:38:47 +0000 (0:00:00.676) 0:02:23.285 **** 2025-09-20 09:40:11.726630 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:40:11.726641 | 
orchestrator | skipping: [testbed-node-4] 2025-09-20 09:40:11.726652 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:40:11.726662 | orchestrator | 2025-09-20 09:40:11.726673 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-09-20 09:40:11.726684 | orchestrator | Saturday 20 September 2025 09:38:47 +0000 (0:00:00.336) 0:02:23.622 **** 2025-09-20 09:40:11.726695 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:40:11.726706 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:40:11.726716 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:40:11.726727 | orchestrator | 2025-09-20 09:40:11.726738 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-20 09:40:11.726748 | orchestrator | Saturday 20 September 2025 09:38:47 +0000 (0:00:00.320) 0:02:23.942 **** 2025-09-20 09:40:11.726768 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:40:11.726779 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:40:11.726790 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:40:11.726800 | orchestrator | 2025-09-20 09:40:11.726812 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-20 09:40:11.726822 | orchestrator | Saturday 20 September 2025 09:38:48 +0000 (0:00:00.345) 0:02:24.288 **** 2025-09-20 09:40:11.726833 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:40:11.726844 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:40:11.726855 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:40:11.726866 | orchestrator | 2025-09-20 09:40:11.726876 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-20 09:40:11.726887 | orchestrator | Saturday 20 September 2025 09:38:48 +0000 (0:00:00.706) 0:02:24.995 **** 2025-09-20 09:40:11.726898 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:40:11.726909 | orchestrator | 
changed: [testbed-node-4] 2025-09-20 09:40:11.726919 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:40:11.726930 | orchestrator | 2025-09-20 09:40:11.726945 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-20 09:40:11.726957 | orchestrator | Saturday 20 September 2025 09:38:50 +0000 (0:00:01.358) 0:02:26.353 **** 2025-09-20 09:40:11.726968 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:40:11.726978 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:40:11.726989 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:40:11.727000 | orchestrator | 2025-09-20 09:40:11.727011 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-20 09:40:11.727022 | orchestrator | Saturday 20 September 2025 09:38:51 +0000 (0:00:01.203) 0:02:27.557 **** 2025-09-20 09:40:11.727032 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:40:11.727043 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:40:11.727054 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:40:11.727064 | orchestrator | 2025-09-20 09:40:11.727081 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-20 09:40:11.727093 | orchestrator | 2025-09-20 09:40:11.727104 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-20 09:40:11.727114 | orchestrator | Saturday 20 September 2025 09:39:03 +0000 (0:00:12.328) 0:02:39.885 **** 2025-09-20 09:40:11.727125 | orchestrator | ok: [testbed-manager] 2025-09-20 09:40:11.727136 | orchestrator | 2025-09-20 09:40:11.727147 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-20 09:40:11.727157 | orchestrator | Saturday 20 September 2025 09:39:04 +0000 (0:00:00.885) 0:02:40.771 **** 2025-09-20 09:40:11.727168 | orchestrator | changed: [testbed-manager] 2025-09-20 09:40:11.727179 | 
orchestrator | 2025-09-20 09:40:11.727190 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-20 09:40:11.727200 | orchestrator | Saturday 20 September 2025 09:39:04 +0000 (0:00:00.362) 0:02:41.133 **** 2025-09-20 09:40:11.727211 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-20 09:40:11.727222 | orchestrator | 2025-09-20 09:40:11.727233 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-20 09:40:11.727243 | orchestrator | Saturday 20 September 2025 09:39:05 +0000 (0:00:00.554) 0:02:41.688 **** 2025-09-20 09:40:11.727254 | orchestrator | changed: [testbed-manager] 2025-09-20 09:40:11.727265 | orchestrator | 2025-09-20 09:40:11.727276 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-20 09:40:11.727286 | orchestrator | Saturday 20 September 2025 09:39:06 +0000 (0:00:00.932) 0:02:42.621 **** 2025-09-20 09:40:11.727297 | orchestrator | changed: [testbed-manager] 2025-09-20 09:40:11.727308 | orchestrator | 2025-09-20 09:40:11.727319 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-20 09:40:11.727329 | orchestrator | Saturday 20 September 2025 09:39:07 +0000 (0:00:00.705) 0:02:43.326 **** 2025-09-20 09:40:11.727340 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-20 09:40:11.727357 | orchestrator | 2025-09-20 09:40:11.727368 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-20 09:40:11.727379 | orchestrator | Saturday 20 September 2025 09:39:08 +0000 (0:00:01.217) 0:02:44.544 **** 2025-09-20 09:40:11.727389 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-20 09:40:11.727400 | orchestrator | 2025-09-20 09:40:11.727411 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-20 
09:40:11.727422 | orchestrator | Saturday 20 September 2025 09:39:09 +0000 (0:00:00.818) 0:02:45.362 **** 2025-09-20 09:40:11.727433 | orchestrator | changed: [testbed-manager] 2025-09-20 09:40:11.727462 | orchestrator | 2025-09-20 09:40:11.727474 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-20 09:40:11.727485 | orchestrator | Saturday 20 September 2025 09:39:09 +0000 (0:00:00.496) 0:02:45.858 **** 2025-09-20 09:40:11.727496 | orchestrator | changed: [testbed-manager] 2025-09-20 09:40:11.727506 | orchestrator | 2025-09-20 09:40:11.727517 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-20 09:40:11.727528 | orchestrator | 2025-09-20 09:40:11.727539 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-20 09:40:11.727550 | orchestrator | Saturday 20 September 2025 09:39:10 +0000 (0:00:00.580) 0:02:46.438 **** 2025-09-20 09:40:11.727560 | orchestrator | ok: [testbed-manager] 2025-09-20 09:40:11.727571 | orchestrator | 2025-09-20 09:40:11.727582 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-20 09:40:11.727593 | orchestrator | Saturday 20 September 2025 09:39:10 +0000 (0:00:00.142) 0:02:46.581 **** 2025-09-20 09:40:11.727603 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-20 09:40:11.727614 | orchestrator | 2025-09-20 09:40:11.727625 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-20 09:40:11.727636 | orchestrator | Saturday 20 September 2025 09:39:10 +0000 (0:00:00.217) 0:02:46.798 **** 2025-09-20 09:40:11.727646 | orchestrator | ok: [testbed-manager] 2025-09-20 09:40:11.727657 | orchestrator | 2025-09-20 09:40:11.727668 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 
2025-09-20 09:40:11.727679 | orchestrator | Saturday 20 September 2025 09:39:11 +0000 (0:00:01.038) 0:02:47.837 **** 2025-09-20 09:40:11.727689 | orchestrator | ok: [testbed-manager] 2025-09-20 09:40:11.727700 | orchestrator | 2025-09-20 09:40:11.727711 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-20 09:40:11.727722 | orchestrator | Saturday 20 September 2025 09:39:13 +0000 (0:00:01.349) 0:02:49.186 **** 2025-09-20 09:40:11.727732 | orchestrator | changed: [testbed-manager] 2025-09-20 09:40:11.727743 | orchestrator | 2025-09-20 09:40:11.727754 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-20 09:40:11.727765 | orchestrator | Saturday 20 September 2025 09:39:13 +0000 (0:00:00.920) 0:02:50.107 **** 2025-09-20 09:40:11.727776 | orchestrator | ok: [testbed-manager] 2025-09-20 09:40:11.727786 | orchestrator | 2025-09-20 09:40:11.727797 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-20 09:40:11.727808 | orchestrator | Saturday 20 September 2025 09:39:14 +0000 (0:00:00.562) 0:02:50.669 **** 2025-09-20 09:40:11.727819 | orchestrator | changed: [testbed-manager] 2025-09-20 09:40:11.727829 | orchestrator | 2025-09-20 09:40:11.727840 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-20 09:40:11.727855 | orchestrator | Saturday 20 September 2025 09:39:22 +0000 (0:00:07.898) 0:02:58.567 **** 2025-09-20 09:40:11.727866 | orchestrator | changed: [testbed-manager] 2025-09-20 09:40:11.727877 | orchestrator | 2025-09-20 09:40:11.727888 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-20 09:40:11.727899 | orchestrator | Saturday 20 September 2025 09:39:35 +0000 (0:00:13.439) 0:03:12.007 **** 2025-09-20 09:40:11.727910 | orchestrator | ok: [testbed-manager] 2025-09-20 09:40:11.727921 | orchestrator 
| 2025-09-20 09:40:11.727931 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-20 09:40:11.727948 | orchestrator | 2025-09-20 09:40:11.727959 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-20 09:40:11.727975 | orchestrator | Saturday 20 September 2025 09:39:36 +0000 (0:00:00.668) 0:03:12.676 **** 2025-09-20 09:40:11.727987 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:40:11.727998 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:40:11.728009 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:40:11.728019 | orchestrator | 2025-09-20 09:40:11.728030 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-20 09:40:11.728041 | orchestrator | Saturday 20 September 2025 09:39:36 +0000 (0:00:00.362) 0:03:13.038 **** 2025-09-20 09:40:11.728052 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.728063 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:40:11.728073 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:40:11.728084 | orchestrator | 2025-09-20 09:40:11.728095 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-20 09:40:11.728106 | orchestrator | Saturday 20 September 2025 09:39:37 +0000 (0:00:00.517) 0:03:13.556 **** 2025-09-20 09:40:11.728116 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:40:11.728127 | orchestrator | 2025-09-20 09:40:11.728138 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-20 09:40:11.728148 | orchestrator | Saturday 20 September 2025 09:39:38 +0000 (0:00:01.183) 0:03:14.740 **** 2025-09-20 09:40:11.728159 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.728170 | orchestrator | 2025-09-20 09:40:11.728181 | 
orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-20 09:40:11.728191 | orchestrator | Saturday 20 September 2025 09:39:38 +0000 (0:00:00.320) 0:03:15.060 **** 2025-09-20 09:40:11.728202 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.728213 | orchestrator | 2025-09-20 09:40:11.728224 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-20 09:40:11.728234 | orchestrator | Saturday 20 September 2025 09:39:39 +0000 (0:00:00.226) 0:03:15.287 **** 2025-09-20 09:40:11.728245 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.728256 | orchestrator | 2025-09-20 09:40:11.728267 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-20 09:40:11.728277 | orchestrator | Saturday 20 September 2025 09:39:39 +0000 (0:00:00.204) 0:03:15.492 **** 2025-09-20 09:40:11.728288 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.728299 | orchestrator | 2025-09-20 09:40:11.728310 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-20 09:40:11.728321 | orchestrator | Saturday 20 September 2025 09:39:39 +0000 (0:00:00.225) 0:03:15.717 **** 2025-09-20 09:40:11.728331 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.728342 | orchestrator | 2025-09-20 09:40:11.728353 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-20 09:40:11.728364 | orchestrator | Saturday 20 September 2025 09:39:39 +0000 (0:00:00.223) 0:03:15.940 **** 2025-09-20 09:40:11.728374 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:40:11.728385 | orchestrator | 2025-09-20 09:40:11.728396 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-20 09:40:11.728407 | orchestrator | Saturday 20 September 2025 09:39:40 +0000 (0:00:00.205) 0:03:16.146 **** 
2025-09-20 09:40:11.728417 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728428 | orchestrator |
2025-09-20 09:40:11.728439 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-20 09:40:11.728465 | orchestrator | Saturday 20 September 2025 09:39:40 +0000 (0:00:00.294) 0:03:16.440 ****
2025-09-20 09:40:11.728476 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728487 | orchestrator |
2025-09-20 09:40:11.728498 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-20 09:40:11.728509 | orchestrator | Saturday 20 September 2025 09:39:40 +0000 (0:00:00.286) 0:03:16.727 ****
2025-09-20 09:40:11.728526 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728536 | orchestrator |
2025-09-20 09:40:11.728547 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-20 09:40:11.728558 | orchestrator | Saturday 20 September 2025 09:39:40 +0000 (0:00:00.239) 0:03:16.967 ****
2025-09-20 09:40:11.728569 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-20 09:40:11.728580 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-20 09:40:11.728590 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728601 | orchestrator |
2025-09-20 09:40:11.728612 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-20 09:40:11.728623 | orchestrator | Saturday 20 September 2025 09:39:41 +0000 (0:00:00.837) 0:03:17.804 ****
2025-09-20 09:40:11.728634 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728644 | orchestrator |
2025-09-20 09:40:11.728655 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-20 09:40:11.728666 | orchestrator | Saturday 20 September 2025 09:39:41 +0000 (0:00:00.248) 0:03:18.053 ****
2025-09-20 09:40:11.728677 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728687 | orchestrator |
2025-09-20 09:40:11.728698 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-20 09:40:11.728709 | orchestrator | Saturday 20 September 2025 09:39:42 +0000 (0:00:00.240) 0:03:18.294 ****
2025-09-20 09:40:11.728720 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728730 | orchestrator |
2025-09-20 09:40:11.728741 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-20 09:40:11.728757 | orchestrator | Saturday 20 September 2025 09:39:42 +0000 (0:00:00.226) 0:03:18.520 ****
2025-09-20 09:40:11.728768 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728779 | orchestrator |
2025-09-20 09:40:11.728790 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-20 09:40:11.728800 | orchestrator | Saturday 20 September 2025 09:39:42 +0000 (0:00:00.211) 0:03:18.732 ****
2025-09-20 09:40:11.728811 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728822 | orchestrator |
2025-09-20 09:40:11.728833 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-20 09:40:11.728844 | orchestrator | Saturday 20 September 2025 09:39:42 +0000 (0:00:00.306) 0:03:19.039 ****
2025-09-20 09:40:11.728855 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728865 | orchestrator |
2025-09-20 09:40:11.728876 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-20 09:40:11.728893 | orchestrator | Saturday 20 September 2025 09:39:43 +0000 (0:00:00.341) 0:03:19.381 ****
2025-09-20 09:40:11.728904 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728915 | orchestrator |
2025-09-20 09:40:11.728926 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-20 09:40:11.728937 | orchestrator | Saturday 20 September 2025 09:39:43 +0000 (0:00:00.272) 0:03:19.653 ****
2025-09-20 09:40:11.728947 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.728958 | orchestrator |
2025-09-20 09:40:11.728969 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-20 09:40:11.728980 | orchestrator | Saturday 20 September 2025 09:39:43 +0000 (0:00:00.228) 0:03:19.882 ****
2025-09-20 09:40:11.728991 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729002 | orchestrator |
2025-09-20 09:40:11.729012 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-20 09:40:11.729023 | orchestrator | Saturday 20 September 2025 09:39:44 +0000 (0:00:00.278) 0:03:20.161 ****
2025-09-20 09:40:11.729034 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729045 | orchestrator |
2025-09-20 09:40:11.729056 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-20 09:40:11.729067 | orchestrator | Saturday 20 September 2025 09:39:44 +0000 (0:00:00.295) 0:03:20.457 ****
2025-09-20 09:40:11.729077 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729088 | orchestrator |
2025-09-20 09:40:11.729106 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-20 09:40:11.729117 | orchestrator | Saturday 20 September 2025 09:39:44 +0000 (0:00:00.236) 0:03:20.694 ****
2025-09-20 09:40:11.729128 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-20 09:40:11.729139 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-20 09:40:11.729149 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-20 09:40:11.729160 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-20 09:40:11.729171 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729182 | orchestrator |
2025-09-20 09:40:11.729192 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-20 09:40:11.729203 | orchestrator | Saturday 20 September 2025 09:39:45 +0000 (0:00:01.062) 0:03:21.757 ****
2025-09-20 09:40:11.729214 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729225 | orchestrator |
2025-09-20 09:40:11.729236 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-20 09:40:11.729247 | orchestrator | Saturday 20 September 2025 09:39:45 +0000 (0:00:00.247) 0:03:22.004 ****
2025-09-20 09:40:11.729257 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729268 | orchestrator |
2025-09-20 09:40:11.729279 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-20 09:40:11.729290 | orchestrator | Saturday 20 September 2025 09:39:46 +0000 (0:00:00.244) 0:03:22.248 ****
2025-09-20 09:40:11.729301 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729311 | orchestrator |
2025-09-20 09:40:11.729322 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-20 09:40:11.729333 | orchestrator | Saturday 20 September 2025 09:39:46 +0000 (0:00:00.221) 0:03:22.470 ****
2025-09-20 09:40:11.729344 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729354 | orchestrator |
2025-09-20 09:40:11.729365 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-20 09:40:11.729376 | orchestrator | Saturday 20 September 2025 09:39:46 +0000 (0:00:00.233) 0:03:22.703 ****
2025-09-20 09:40:11.729387 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-20 09:40:11.729397 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-20 09:40:11.729408 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729419 | orchestrator |
2025-09-20 09:40:11.729430 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-20 09:40:11.729456 | orchestrator | Saturday 20 September 2025 09:39:46 +0000 (0:00:00.348) 0:03:23.052 ****
2025-09-20 09:40:11.729468 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.729479 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.729490 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.729501 | orchestrator |
2025-09-20 09:40:11.729512 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-20 09:40:11.729522 | orchestrator | Saturday 20 September 2025 09:39:47 +0000 (0:00:00.320) 0:03:23.373 ****
2025-09-20 09:40:11.729533 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.729544 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.729555 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.729566 | orchestrator |
2025-09-20 09:40:11.729577 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-20 09:40:11.729587 | orchestrator |
2025-09-20 09:40:11.729598 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-20 09:40:11.729609 | orchestrator | Saturday 20 September 2025 09:39:48 +0000 (0:00:01.295) 0:03:24.668 ****
2025-09-20 09:40:11.729620 | orchestrator | ok: [testbed-manager]
2025-09-20 09:40:11.729630 | orchestrator |
2025-09-20 09:40:11.729641 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-09-20 09:40:11.729656 | orchestrator | Saturday 20 September 2025 09:39:48 +0000 (0:00:00.242) 0:03:24.838 ****
2025-09-20 09:40:11.729673 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-09-20 09:40:11.729684 | orchestrator |
2025-09-20 09:40:11.729695 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-20 09:40:11.729706 | orchestrator | Saturday 20 September 2025 09:39:48 +0000 (0:00:00.242) 0:03:25.081 ****
2025-09-20 09:40:11.729717 | orchestrator | changed: [testbed-manager]
2025-09-20 09:40:11.729727 | orchestrator |
2025-09-20 09:40:11.729738 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-20 09:40:11.729749 | orchestrator |
2025-09-20 09:40:11.729760 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-20 09:40:11.729777 | orchestrator | Saturday 20 September 2025 09:39:54 +0000 (0:00:05.685) 0:03:30.767 ****
2025-09-20 09:40:11.729788 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:40:11.729799 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:40:11.729810 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:40:11.729821 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:40:11.729832 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:40:11.729842 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:40:11.729853 | orchestrator |
2025-09-20 09:40:11.729864 | orchestrator | TASK [Manage labels] ***********************************************************
2025-09-20 09:40:11.729875 | orchestrator | Saturday 20 September 2025 09:39:55 +0000 (0:00:00.743) 0:03:31.510 ****
2025-09-20 09:40:11.729886 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-20 09:40:11.729897 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-20 09:40:11.729907 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-20 09:40:11.729918 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-20 09:40:11.729929 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-20 09:40:11.729940 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-20 09:40:11.729950 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-20 09:40:11.729961 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-20 09:40:11.729971 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-20 09:40:11.729982 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-20 09:40:11.729993 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-20 09:40:11.730004 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-20 09:40:11.730060 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-20 09:40:11.730075 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-20 09:40:11.730086 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-20 09:40:11.730097 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-20 09:40:11.730108 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-20 09:40:11.730119 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-20 09:40:11.730129 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-20 09:40:11.730140 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-20 09:40:11.730151 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-20 09:40:11.730162 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-20 09:40:11.730180 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-20 09:40:11.730191 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-20 09:40:11.730201 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-20 09:40:11.730212 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-20 09:40:11.730223 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-20 09:40:11.730234 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-20 09:40:11.730245 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-20 09:40:11.730256 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-20 09:40:11.730266 | orchestrator |
2025-09-20 09:40:11.730277 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-20 09:40:11.730288 | orchestrator | Saturday 20 September 2025 09:40:08 +0000 (0:00:13.048) 0:03:44.559 ****
2025-09-20 09:40:11.730299 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.730309 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.730320 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.730331 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.730341 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.730352 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.730363 | orchestrator |
2025-09-20 09:40:11.730378 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-20 09:40:11.730389 | orchestrator | Saturday 20 September 2025 09:40:09 +0000 (0:00:00.605) 0:03:45.164 ****
2025-09-20 09:40:11.730400 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:40:11.730411 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:40:11.730422 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:40:11.730432 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:40:11.730491 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:40:11.730504 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:40:11.730515 | orchestrator |
2025-09-20 09:40:11.730526 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:40:11.730544 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:40:11.730557 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-20 09:40:11.730569 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-20 09:40:11.730580 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-20 09:40:11.730591 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-20 09:40:11.730601 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-20 09:40:11.730612 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-20 09:40:11.730623 | orchestrator |
2025-09-20 09:40:11.730634 | orchestrator |
2025-09-20 09:40:11.730645 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:40:11.730655 | orchestrator | Saturday 20 September 2025 09:40:09 +0000 (0:00:00.387) 0:03:45.551 ****
2025-09-20 09:40:11.730666 | orchestrator | ===============================================================================
2025-09-20 09:40:11.730684 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.23s
2025-09-20 09:40:11.730695 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.91s
2025-09-20 09:40:11.730706 | orchestrator | kubectl : Install required packages ------------------------------------ 13.44s
2025-09-20 09:40:11.730716 | orchestrator | Manage labels ---------------------------------------------------------- 13.05s
2025-09-20 09:40:11.730727 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.33s
2025-09-20 09:40:11.730738 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.90s
2025-09-20 09:40:11.730748 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.69s
2025-09-20 09:40:11.730759 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.14s
2025-09-20 09:40:11.730770 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.21s
2025-09-20 09:40:11.730781 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.67s
2025-09-20 09:40:11.730792 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.48s
2025-09-20 09:40:11.730802 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 2.40s
2025-09-20 09:40:11.730813 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.16s
2025-09-20 09:40:11.730823 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.16s
2025-09-20 09:40:11.730834 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.96s
2025-09-20 09:40:11.730845 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.96s
2025-09-20 09:40:11.730856 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.88s
2025-09-20 09:40:11.730866 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.81s
2025-09-20 09:40:11.730877 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.78s
2025-09-20 09:40:11.730888 | orchestrator | k3s_agent : Check if system is PXE-booted ------------------------------- 1.62s
2025-09-20 09:40:11.730899 | orchestrator | 2025-09-20 09:40:11 | INFO  | Task bad4e75a-2109-461b-b7ac-d0b8e5178a74 is in state STARTED
2025-09-20 09:40:11.730910 | orchestrator | 2025-09-20 09:40:11 | INFO  | Task b6e9f4ec-b5af-4b54-8b37-0d5b6d1ed8eb is in state STARTED
2025-09-20 09:40:11.730921 | orchestrator | 2025-09-20 09:40:11 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:11.730932 | orchestrator | 2025-09-20 09:40:11 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:11.730947 | orchestrator | 2025-09-20 09:40:11 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:11.730958 | orchestrator | 2025-09-20 09:40:11 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:14.766542 | orchestrator | 2025-09-20 09:40:14 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:14.767605 | orchestrator | 2025-09-20 09:40:14 | INFO  | Task bad4e75a-2109-461b-b7ac-d0b8e5178a74 is in state STARTED
2025-09-20 09:40:14.769004 | orchestrator | 2025-09-20 09:40:14 | INFO  | Task b6e9f4ec-b5af-4b54-8b37-0d5b6d1ed8eb is in state STARTED
2025-09-20 09:40:14.770268 | orchestrator | 2025-09-20 09:40:14 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:14.771333 | orchestrator | 2025-09-20 09:40:14 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:14.775388 | orchestrator | 2025-09-20 09:40:14 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:14.775484 | orchestrator | 2025-09-20 09:40:14 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:17.819897 | orchestrator | 2025-09-20 09:40:17 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:17.819999 | orchestrator | 2025-09-20 09:40:17 | INFO  | Task bad4e75a-2109-461b-b7ac-d0b8e5178a74 is in state STARTED
2025-09-20 09:40:17.820013 | orchestrator | 2025-09-20 09:40:17 | INFO  | Task b6e9f4ec-b5af-4b54-8b37-0d5b6d1ed8eb is in state SUCCESS
2025-09-20 09:40:17.820025 | orchestrator | 2025-09-20 09:40:17 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:17.820036 | orchestrator | 2025-09-20 09:40:17 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:17.820047 | orchestrator | 2025-09-20 09:40:17 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:17.820058 | orchestrator | 2025-09-20 09:40:17 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:20.839430 | orchestrator | 2025-09-20 09:40:20 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:20.839582 | orchestrator | 2025-09-20 09:40:20 | INFO  | Task bad4e75a-2109-461b-b7ac-d0b8e5178a74 is in state SUCCESS
2025-09-20 09:40:20.841646 | orchestrator | 2025-09-20 09:40:20 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:20.842365 | orchestrator | 2025-09-20 09:40:20 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:20.843863 | orchestrator | 2025-09-20 09:40:20 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:20.843896 | orchestrator | 2025-09-20 09:40:20 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:23.901186 | orchestrator | 2025-09-20 09:40:23 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:23.901296 | orchestrator | 2025-09-20 09:40:23 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:23.901311 | orchestrator | 2025-09-20 09:40:23 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:23.901323 | orchestrator | 2025-09-20 09:40:23 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:23.901334 | orchestrator | 2025-09-20 09:40:23 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:26.931956 | orchestrator | 2025-09-20 09:40:26 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:26.932910 | orchestrator | 2025-09-20 09:40:26 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:26.933425 | orchestrator | 2025-09-20 09:40:26 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:26.934187 | orchestrator | 2025-09-20 09:40:26 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:26.934216 | orchestrator | 2025-09-20 09:40:26 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:29.969379 | orchestrator | 2025-09-20 09:40:29 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:29.970267 | orchestrator | 2025-09-20 09:40:29 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:29.972008 | orchestrator | 2025-09-20 09:40:29 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:29.973284 | orchestrator | 2025-09-20 09:40:29 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:29.973327 | orchestrator | 2025-09-20 09:40:29 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:33.014252 | orchestrator | 2025-09-20 09:40:33 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:33.014360 | orchestrator | 2025-09-20 09:40:33 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:33.014899 | orchestrator | 2025-09-20 09:40:33 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:33.015913 | orchestrator | 2025-09-20 09:40:33 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:33.015981 | orchestrator | 2025-09-20 09:40:33 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:36.060946 | orchestrator | 2025-09-20 09:40:36 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:36.061581 | orchestrator | 2025-09-20 09:40:36 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:36.065600 | orchestrator | 2025-09-20 09:40:36 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:36.067018 | orchestrator | 2025-09-20 09:40:36 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:36.067287 | orchestrator | 2025-09-20 09:40:36 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:39.114005 | orchestrator | 2025-09-20 09:40:39 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:39.114196 | orchestrator | 2025-09-20 09:40:39 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:39.114209 | orchestrator | 2025-09-20 09:40:39 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:39.114220 | orchestrator | 2025-09-20 09:40:39 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:39.114230 | orchestrator | 2025-09-20 09:40:39 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:42.150892 | orchestrator | 2025-09-20 09:40:42 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:42.150997 | orchestrator | 2025-09-20 09:40:42 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:42.151333 | orchestrator | 2025-09-20 09:40:42 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:42.153767 | orchestrator | 2025-09-20 09:40:42 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:42.153882 | orchestrator | 2025-09-20 09:40:42 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:45.184970 | orchestrator | 2025-09-20 09:40:45 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:45.185908 | orchestrator | 2025-09-20 09:40:45 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:45.186931 | orchestrator | 2025-09-20 09:40:45 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:45.187450 | orchestrator | 2025-09-20 09:40:45 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:45.187720 | orchestrator | 2025-09-20 09:40:45 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:48.234941 | orchestrator | 2025-09-20 09:40:48 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:48.235365 | orchestrator | 2025-09-20 09:40:48 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:48.236711 | orchestrator | 2025-09-20 09:40:48 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:48.237349 | orchestrator | 2025-09-20 09:40:48 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:48.237376 | orchestrator | 2025-09-20 09:40:48 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:51.273371 | orchestrator | 2025-09-20 09:40:51 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:51.274176 | orchestrator | 2025-09-20 09:40:51 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:51.278141 | orchestrator | 2025-09-20 09:40:51 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:51.281186 | orchestrator | 2025-09-20 09:40:51 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:51.281291 | orchestrator | 2025-09-20 09:40:51 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:54.339666 | orchestrator | 2025-09-20 09:40:54 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:54.344302 | orchestrator | 2025-09-20 09:40:54 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:54.346987 | orchestrator | 2025-09-20 09:40:54 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:54.347570 | orchestrator | 2025-09-20 09:40:54 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:54.347594 | orchestrator | 2025-09-20 09:40:54 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:40:57.382731 | orchestrator | 2025-09-20 09:40:57 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:40:57.384008 | orchestrator | 2025-09-20 09:40:57 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:40:57.385676 | orchestrator | 2025-09-20 09:40:57 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:40:57.387002 | orchestrator | 2025-09-20 09:40:57 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:40:57.387140 | orchestrator | 2025-09-20 09:40:57 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:00.436365 | orchestrator | 2025-09-20 09:41:00 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:00.437313 | orchestrator | 2025-09-20 09:41:00 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:00.439449 | orchestrator | 2025-09-20 09:41:00 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:00.441171 | orchestrator | 2025-09-20 09:41:00 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:00.442687 | orchestrator | 2025-09-20 09:41:00 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:03.472760 | orchestrator | 2025-09-20 09:41:03 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:03.473483 | orchestrator | 2025-09-20 09:41:03 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:03.474769 | orchestrator | 2025-09-20 09:41:03 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:03.478197 | orchestrator | 2025-09-20 09:41:03 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:03.478231 | orchestrator | 2025-09-20 09:41:03 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:06.512057 | orchestrator | 2025-09-20 09:41:06 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:06.512614 | orchestrator | 2025-09-20 09:41:06 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:06.513970 | orchestrator | 2025-09-20 09:41:06 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:06.514831 | orchestrator | 2025-09-20 09:41:06 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:06.514857 | orchestrator | 2025-09-20 09:41:06 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:09.548784 | orchestrator | 2025-09-20 09:41:09 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:09.549167 | orchestrator | 2025-09-20 09:41:09 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:09.552080 | orchestrator | 2025-09-20 09:41:09 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:09.552926 | orchestrator | 2025-09-20 09:41:09 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:09.552950 | orchestrator | 2025-09-20 09:41:09 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:12.594352 | orchestrator | 2025-09-20 09:41:12 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:12.595143 | orchestrator | 2025-09-20 09:41:12 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:12.596115 | orchestrator | 2025-09-20 09:41:12 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:12.597452 | orchestrator | 2025-09-20 09:41:12 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:12.597474 | orchestrator | 2025-09-20 09:41:12 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:15.651494 | orchestrator | 2025-09-20 09:41:15 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:15.651893 | orchestrator | 2025-09-20 09:41:15 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:15.652984 | orchestrator | 2025-09-20 09:41:15 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:15.654175 | orchestrator | 2025-09-20 09:41:15 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:15.654197 | orchestrator | 2025-09-20 09:41:15 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:18.687330 | orchestrator | 2025-09-20 09:41:18 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:18.687571 | orchestrator | 2025-09-20 09:41:18 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:18.688476 | orchestrator | 2025-09-20 09:41:18 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:18.689252 | orchestrator | 2025-09-20 09:41:18 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:18.689276 | orchestrator | 2025-09-20 09:41:18 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:21.732905 | orchestrator | 2025-09-20 09:41:21 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:21.733615 | orchestrator | 2025-09-20 09:41:21 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:21.734755 | orchestrator | 2025-09-20 09:41:21 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:21.735574 | orchestrator | 2025-09-20 09:41:21 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:21.735604 | orchestrator | 2025-09-20 09:41:21 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:24.782657 | orchestrator | 2025-09-20 09:41:24 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:24.782760 | orchestrator | 2025-09-20 09:41:24 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:24.783209 | orchestrator | 2025-09-20 09:41:24 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:24.784482 | orchestrator | 2025-09-20 09:41:24 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:24.785401 | orchestrator | 2025-09-20 09:41:24 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:27.818847 | orchestrator | 2025-09-20 09:41:27 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state STARTED
2025-09-20 09:41:27.821216 | orchestrator | 2025-09-20 09:41:27 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:41:27.824747 | orchestrator | 2025-09-20 09:41:27 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED
2025-09-20 09:41:27.826135 | orchestrator | 2025-09-20 09:41:27 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:41:27.826160 | orchestrator | 2025-09-20 09:41:27 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:41:30.856930 | orchestrator | 2025-09-20 09:41:30 | INFO  | Task ff579a86-2eba-4571-a7ed-4b92095d0d08 is in state SUCCESS
2025-09-20 09:41:30.857785 | orchestrator |
2025-09-20 09:41:30.857802 | orchestrator |
2025-09-20 09:41:30.857808 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-09-20 09:41:30.857813 | orchestrator |
2025-09-20 09:41:30.857818 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-20 09:41:30.857823 | orchestrator | Saturday 20 September 2025 09:40:13 +0000 (0:00:00.158) 0:00:00.158 ****
2025-09-20 09:41:30.857828 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-20 09:41:30.857833 | orchestrator |
2025-09-20 09:41:30.857838 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-20 09:41:30.857842 | orchestrator | Saturday 20 September 2025 09:40:14 +0000 (0:00:00.687) 0:00:00.846 **** 2025-09-20
09:41:30.857847 | orchestrator | changed: [testbed-manager] 2025-09-20 09:41:30.857852 | orchestrator | 2025-09-20 09:41:30.857857 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-20 09:41:30.857861 | orchestrator | Saturday 20 September 2025 09:40:15 +0000 (0:00:01.314) 0:00:02.161 **** 2025-09-20 09:41:30.857866 | orchestrator | changed: [testbed-manager] 2025-09-20 09:41:30.857870 | orchestrator | 2025-09-20 09:41:30.857874 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:41:30.857878 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:41:30.857883 | orchestrator | 2025-09-20 09:41:30.857887 | orchestrator | 2025-09-20 09:41:30.857891 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:41:30.857895 | orchestrator | Saturday 20 September 2025 09:40:15 +0000 (0:00:00.673) 0:00:02.834 **** 2025-09-20 09:41:30.857910 | orchestrator | =============================================================================== 2025-09-20 09:41:30.857914 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.31s 2025-09-20 09:41:30.857918 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2025-09-20 09:41:30.857922 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.67s 2025-09-20 09:41:30.857926 | orchestrator | 2025-09-20 09:41:30.857930 | orchestrator | 2025-09-20 09:41:30.857933 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-20 09:41:30.857937 | orchestrator | 2025-09-20 09:41:30.857941 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-20 09:41:30.857961 | orchestrator | Saturday 20 September 2025 
09:40:13 +0000 (0:00:00.142) 0:00:00.142 **** 2025-09-20 09:41:30.857965 | orchestrator | ok: [testbed-manager] 2025-09-20 09:41:30.857969 | orchestrator | 2025-09-20 09:41:30.857973 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-20 09:41:30.857977 | orchestrator | Saturday 20 September 2025 09:40:13 +0000 (0:00:00.522) 0:00:00.665 **** 2025-09-20 09:41:30.857981 | orchestrator | ok: [testbed-manager] 2025-09-20 09:41:30.857984 | orchestrator | 2025-09-20 09:41:30.857988 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-20 09:41:30.857992 | orchestrator | Saturday 20 September 2025 09:40:14 +0000 (0:00:00.513) 0:00:01.178 **** 2025-09-20 09:41:30.857995 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-20 09:41:30.857999 | orchestrator | 2025-09-20 09:41:30.858003 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-20 09:41:30.858006 | orchestrator | Saturday 20 September 2025 09:40:14 +0000 (0:00:00.677) 0:00:01.856 **** 2025-09-20 09:41:30.858010 | orchestrator | changed: [testbed-manager] 2025-09-20 09:41:30.858014 | orchestrator | 2025-09-20 09:41:30.858090 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-20 09:41:30.858094 | orchestrator | Saturday 20 September 2025 09:40:15 +0000 (0:00:01.011) 0:00:02.867 **** 2025-09-20 09:41:30.858098 | orchestrator | changed: [testbed-manager] 2025-09-20 09:41:30.858102 | orchestrator | 2025-09-20 09:41:30.858106 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-20 09:41:30.858109 | orchestrator | Saturday 20 September 2025 09:40:16 +0000 (0:00:00.878) 0:00:03.746 **** 2025-09-20 09:41:30.858132 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-20 09:41:30.858136 | orchestrator | 2025-09-20 
09:41:30.858140 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-20 09:41:30.858143 | orchestrator | Saturday 20 September 2025 09:40:18 +0000 (0:00:01.518) 0:00:05.264 **** 2025-09-20 09:41:30.858147 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-20 09:41:30.858151 | orchestrator | 2025-09-20 09:41:30.858155 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-20 09:41:30.858159 | orchestrator | Saturday 20 September 2025 09:40:19 +0000 (0:00:00.788) 0:00:06.052 **** 2025-09-20 09:41:30.858163 | orchestrator | ok: [testbed-manager] 2025-09-20 09:41:30.858166 | orchestrator | 2025-09-20 09:41:30.858170 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-20 09:41:30.858174 | orchestrator | Saturday 20 September 2025 09:40:19 +0000 (0:00:00.399) 0:00:06.451 **** 2025-09-20 09:41:30.858177 | orchestrator | ok: [testbed-manager] 2025-09-20 09:41:30.858181 | orchestrator | 2025-09-20 09:41:30.858185 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:41:30.858189 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:41:30.858193 | orchestrator | 2025-09-20 09:41:30.858197 | orchestrator | 2025-09-20 09:41:30.858201 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:41:30.858204 | orchestrator | Saturday 20 September 2025 09:40:19 +0000 (0:00:00.334) 0:00:06.786 **** 2025-09-20 09:41:30.858208 | orchestrator | =============================================================================== 2025-09-20 09:41:30.858212 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s 2025-09-20 09:41:30.858215 | orchestrator | Write kubeconfig file 
--------------------------------------------------- 1.01s 2025-09-20 09:41:30.858219 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.88s 2025-09-20 09:41:30.858230 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.79s 2025-09-20 09:41:30.858234 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.68s 2025-09-20 09:41:30.858238 | orchestrator | Get home directory of operator user ------------------------------------- 0.52s 2025-09-20 09:41:30.858246 | orchestrator | Create .kube directory -------------------------------------------------- 0.51s 2025-09-20 09:41:30.858250 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.40s 2025-09-20 09:41:30.858254 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.33s 2025-09-20 09:41:30.858257 | orchestrator | 2025-09-20 09:41:30.858261 | orchestrator | 2025-09-20 09:41:30.858265 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-20 09:41:30.858268 | orchestrator | 2025-09-20 09:41:30.858272 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-20 09:41:30.858276 | orchestrator | Saturday 20 September 2025 09:39:05 +0000 (0:00:00.095) 0:00:00.095 **** 2025-09-20 09:41:30.858280 | orchestrator | ok: [localhost] => { 2025-09-20 09:41:30.858284 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 
2025-09-20 09:41:30.858288 | orchestrator | } 2025-09-20 09:41:30.858292 | orchestrator | 2025-09-20 09:41:30.858296 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-20 09:41:30.858300 | orchestrator | Saturday 20 September 2025 09:39:05 +0000 (0:00:00.030) 0:00:00.126 **** 2025-09-20 09:41:30.858309 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-20 09:41:30.858314 | orchestrator | ...ignoring 2025-09-20 09:41:30.858318 | orchestrator | 2025-09-20 09:41:30.858322 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-20 09:41:30.858326 | orchestrator | Saturday 20 September 2025 09:39:09 +0000 (0:00:03.700) 0:00:03.827 **** 2025-09-20 09:41:30.858330 | orchestrator | skipping: [localhost] 2025-09-20 09:41:30.858333 | orchestrator | 2025-09-20 09:41:30.858337 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-20 09:41:30.858341 | orchestrator | Saturday 20 September 2025 09:39:09 +0000 (0:00:00.085) 0:00:03.912 **** 2025-09-20 09:41:30.858345 | orchestrator | ok: [localhost] 2025-09-20 09:41:30.858348 | orchestrator | 2025-09-20 09:41:30.858352 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:41:30.858356 | orchestrator | 2025-09-20 09:41:30.858360 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 09:41:30.858364 | orchestrator | Saturday 20 September 2025 09:39:09 +0000 (0:00:00.284) 0:00:04.197 **** 2025-09-20 09:41:30.858367 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:41:30.858371 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:41:30.858375 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:41:30.858379 | orchestrator | 2025-09-20 
09:41:30.858382 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:41:30.858386 | orchestrator | Saturday 20 September 2025 09:39:09 +0000 (0:00:00.396) 0:00:04.594 **** 2025-09-20 09:41:30.858390 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-20 09:41:30.858394 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-20 09:41:30.858398 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-20 09:41:30.858402 | orchestrator | 2025-09-20 09:41:30.858405 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-20 09:41:30.858409 | orchestrator | 2025-09-20 09:41:30.858413 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-20 09:41:30.858417 | orchestrator | Saturday 20 September 2025 09:39:10 +0000 (0:00:01.028) 0:00:05.622 **** 2025-09-20 09:41:30.858420 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:41:30.858424 | orchestrator | 2025-09-20 09:41:30.858428 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-20 09:41:30.858432 | orchestrator | Saturday 20 September 2025 09:39:11 +0000 (0:00:01.018) 0:00:06.641 **** 2025-09-20 09:41:30.858436 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:41:30.858443 | orchestrator | 2025-09-20 09:41:30.858447 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-20 09:41:30.858451 | orchestrator | Saturday 20 September 2025 09:39:13 +0000 (0:00:01.646) 0:00:08.287 **** 2025-09-20 09:41:30.858454 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:41:30.858458 | orchestrator | 2025-09-20 09:41:30.858462 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2025-09-20 09:41:30.858466 | orchestrator | Saturday 20 September 2025 09:39:14 +0000 (0:00:00.721) 0:00:09.009 **** 2025-09-20 09:41:30.858470 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:41:30.858473 | orchestrator | 2025-09-20 09:41:30.858477 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-20 09:41:30.858481 | orchestrator | Saturday 20 September 2025 09:39:14 +0000 (0:00:00.512) 0:00:09.521 **** 2025-09-20 09:41:30.858484 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:41:30.858488 | orchestrator | 2025-09-20 09:41:30.858492 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-20 09:41:30.858496 | orchestrator | Saturday 20 September 2025 09:39:15 +0000 (0:00:00.397) 0:00:09.919 **** 2025-09-20 09:41:30.858500 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:41:30.858503 | orchestrator | 2025-09-20 09:41:30.858507 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-20 09:41:30.858511 | orchestrator | Saturday 20 September 2025 09:39:15 +0000 (0:00:00.496) 0:00:10.415 **** 2025-09-20 09:41:30.858538 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:41:30.858542 | orchestrator | 2025-09-20 09:41:30.858546 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-20 09:41:30.858552 | orchestrator | Saturday 20 September 2025 09:39:16 +0000 (0:00:01.041) 0:00:11.456 **** 2025-09-20 09:41:30.858556 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:41:30.858560 | orchestrator | 2025-09-20 09:41:30.858564 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-20 09:41:30.858568 | orchestrator | Saturday 20 September 2025 09:39:17 +0000 (0:00:00.937) 0:00:12.393 **** 2025-09-20 
09:41:30.858571 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:41:30.858575 | orchestrator | 2025-09-20 09:41:30.858579 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-20 09:41:30.858583 | orchestrator | Saturday 20 September 2025 09:39:18 +0000 (0:00:00.557) 0:00:12.950 **** 2025-09-20 09:41:30.858587 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:41:30.858590 | orchestrator | 2025-09-20 09:41:30.858594 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-20 09:41:30.858598 | orchestrator | Saturday 20 September 2025 09:39:18 +0000 (0:00:00.415) 0:00:13.366 **** 2025-09-20 09:41:30.858639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 09:41:30.858648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 09:41:30.858670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 
09:41:30.858674 | orchestrator | 2025-09-20 09:41:30.858678 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-20 09:41:30.858682 | orchestrator | Saturday 20 September 2025 09:39:19 +0000 (0:00:00.909) 0:00:14.276 **** 2025-09-20 09:41:30.858734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 09:41:30.858746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 09:41:30.858755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 09:41:30.858759 | orchestrator | 2025-09-20 09:41:30.858763 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-20 09:41:30.858767 | orchestrator | Saturday 20 September 2025 09:39:21 +0000 (0:00:01.720) 0:00:15.996 **** 2025-09-20 09:41:30.858771 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-20 09:41:30.858775 | 
orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-20 09:41:30.858778 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-20 09:41:30.858782 | orchestrator | 2025-09-20 09:41:30.858786 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-20 09:41:30.858790 | orchestrator | Saturday 20 September 2025 09:39:23 +0000 (0:00:01.805) 0:00:17.802 **** 2025-09-20 09:41:30.858793 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-20 09:41:30.858797 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-20 09:41:30.858801 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-20 09:41:30.858805 | orchestrator | 2025-09-20 09:41:30.858808 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-20 09:41:30.858815 | orchestrator | Saturday 20 September 2025 09:39:27 +0000 (0:00:04.856) 0:00:22.658 **** 2025-09-20 09:41:30.858819 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-20 09:41:30.858822 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-20 09:41:30.858826 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-20 09:41:30.858830 | orchestrator | 2025-09-20 09:41:30.858834 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-20 09:41:30.858837 | orchestrator | Saturday 20 September 2025 09:39:29 +0000 (0:00:01.847) 0:00:24.506 **** 2025-09-20 09:41:30.858841 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-20 09:41:30.858845 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-20 09:41:30.858849 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-20 09:41:30.858852 | orchestrator | 2025-09-20 09:41:30.858856 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-20 09:41:30.858863 | orchestrator | Saturday 20 September 2025 09:39:31 +0000 (0:00:02.206) 0:00:26.713 **** 2025-09-20 09:41:30.858867 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-20 09:41:30.858871 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-20 09:41:30.858877 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-20 09:41:30.858880 | orchestrator | 2025-09-20 09:41:30.858884 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-20 09:41:30.858888 | orchestrator | Saturday 20 September 2025 09:39:33 +0000 (0:00:01.806) 0:00:28.519 **** 2025-09-20 09:41:30.858892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-20 09:41:30.858895 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-20 09:41:30.858899 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-20 09:41:30.858903 | orchestrator | 2025-09-20 09:41:30.858907 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-20 09:41:30.858910 | orchestrator | Saturday 20 September 2025 09:39:36 +0000 (0:00:02.502) 0:00:31.022 **** 2025-09-20 
09:41:30.858914 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:41:30.858918 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:41:30.858921 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:41:30.858925 | orchestrator | 2025-09-20 09:41:30.858929 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-20 09:41:30.858933 | orchestrator | Saturday 20 September 2025 09:39:36 +0000 (0:00:00.651) 0:00:31.674 **** 2025-09-20 09:41:30.858937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 09:41:30.858943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 09:41:30.858951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-20 09:41:30.858958 | orchestrator | 2025-09-20 09:41:30.858962 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-20 09:41:30.858965 | orchestrator | Saturday 20 September 
2025 09:39:39 +0000 (0:00:02.967) 0:00:34.642 **** 2025-09-20 09:41:30.858969 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:41:30.858973 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:41:30.858977 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:41:30.858980 | orchestrator | 2025-09-20 09:41:30.858984 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-20 09:41:30.858988 | orchestrator | Saturday 20 September 2025 09:39:41 +0000 (0:00:01.410) 0:00:36.052 **** 2025-09-20 09:41:30.858992 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:41:30.858995 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:41:30.858999 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:41:30.859003 | orchestrator | 2025-09-20 09:41:30.859006 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-20 09:41:30.859010 | orchestrator | Saturday 20 September 2025 09:39:48 +0000 (0:00:07.415) 0:00:43.468 **** 2025-09-20 09:41:30.859014 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:41:30.859018 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:41:30.859021 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:41:30.859025 | orchestrator | 2025-09-20 09:41:30.859029 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-20 09:41:30.859032 | orchestrator | 2025-09-20 09:41:30.859036 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-20 09:41:30.859040 | orchestrator | Saturday 20 September 2025 09:39:49 +0000 (0:00:00.849) 0:00:44.317 **** 2025-09-20 09:41:30.859043 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:41:30.859047 | orchestrator | 2025-09-20 09:41:30.859051 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-20 09:41:30.859055 | orchestrator | Saturday 20 
September 2025 09:39:50 +0000 (0:00:00.707) 0:00:45.025 **** 2025-09-20 09:41:30.859058 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:41:30.859062 | orchestrator | 2025-09-20 09:41:30.859066 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-20 09:41:30.859069 | orchestrator | Saturday 20 September 2025 09:39:50 +0000 (0:00:00.367) 0:00:45.393 **** 2025-09-20 09:41:30.859073 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:41:30.859077 | orchestrator | 2025-09-20 09:41:30.859080 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-20 09:41:30.859084 | orchestrator | Saturday 20 September 2025 09:39:52 +0000 (0:00:02.225) 0:00:47.618 **** 2025-09-20 09:41:30.859088 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:41:30.859091 | orchestrator | 2025-09-20 09:41:30.859095 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-20 09:41:30.859099 | orchestrator | 2025-09-20 09:41:30.859103 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-20 09:41:30.859106 | orchestrator | Saturday 20 September 2025 09:40:48 +0000 (0:00:55.882) 0:01:43.501 **** 2025-09-20 09:41:30.859113 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:41:30.859116 | orchestrator | 2025-09-20 09:41:30.859120 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-20 09:41:30.859124 | orchestrator | Saturday 20 September 2025 09:40:49 +0000 (0:00:00.600) 0:01:44.102 **** 2025-09-20 09:41:30.859127 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:41:30.859131 | orchestrator | 2025-09-20 09:41:30.859135 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-20 09:41:30.859139 | orchestrator | Saturday 20 September 2025 09:40:49 +0000 (0:00:00.202) 
0:01:44.304 **** 2025-09-20 09:41:30.859142 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:41:30.859146 | orchestrator | 2025-09-20 09:41:30.859150 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-20 09:41:30.859153 | orchestrator | Saturday 20 September 2025 09:40:56 +0000 (0:00:06.917) 0:01:51.221 **** 2025-09-20 09:41:30.859157 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:41:30.859161 | orchestrator | 2025-09-20 09:41:30.859164 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-20 09:41:30.859168 | orchestrator | 2025-09-20 09:41:30.859172 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-20 09:41:30.859175 | orchestrator | Saturday 20 September 2025 09:41:07 +0000 (0:00:11.295) 0:02:02.516 **** 2025-09-20 09:41:30.859179 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:41:30.859183 | orchestrator | 2025-09-20 09:41:30.859189 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-20 09:41:30.859193 | orchestrator | Saturday 20 September 2025 09:41:08 +0000 (0:00:00.666) 0:02:03.182 **** 2025-09-20 09:41:30.859196 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:41:30.859200 | orchestrator | 2025-09-20 09:41:30.859204 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-20 09:41:30.859207 | orchestrator | Saturday 20 September 2025 09:41:08 +0000 (0:00:00.289) 0:02:03.472 **** 2025-09-20 09:41:30.859211 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:41:30.859215 | orchestrator | 2025-09-20 09:41:30.859219 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-20 09:41:30.859222 | orchestrator | Saturday 20 September 2025 09:41:10 +0000 (0:00:01.711) 0:02:05.183 **** 2025-09-20 09:41:30.859226 | 
orchestrator | changed: [testbed-node-2] 2025-09-20 09:41:30.859230 | orchestrator | 2025-09-20 09:41:30.859233 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-20 09:41:30.859237 | orchestrator | 2025-09-20 09:41:30.859241 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-20 09:41:30.859245 | orchestrator | Saturday 20 September 2025 09:41:26 +0000 (0:00:16.210) 0:02:21.393 **** 2025-09-20 09:41:30.859248 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:41:30.859252 | orchestrator | 2025-09-20 09:41:30.859256 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-20 09:41:30.859259 | orchestrator | Saturday 20 September 2025 09:41:27 +0000 (0:00:00.521) 0:02:21.915 **** 2025-09-20 09:41:30.859263 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-20 09:41:30.859267 | orchestrator | enable_outward_rabbitmq_True 2025-09-20 09:41:30.859273 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-20 09:41:30.859277 | orchestrator | outward_rabbitmq_restart 2025-09-20 09:41:30.859280 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:41:30.859284 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:41:30.859288 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:41:30.859292 | orchestrator | 2025-09-20 09:41:30.859295 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-20 09:41:30.859299 | orchestrator | skipping: no hosts matched 2025-09-20 09:41:30.859303 | orchestrator | 2025-09-20 09:41:30.859306 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-20 09:41:30.859313 | orchestrator | skipping: no hosts matched 2025-09-20 09:41:30.859317 | orchestrator | 2025-09-20 09:41:30.859320 | 
orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-20 09:41:30.859324 | orchestrator | skipping: no hosts matched 2025-09-20 09:41:30.859328 | orchestrator | 2025-09-20 09:41:30.859331 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:41:30.859335 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-20 09:41:30.859340 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-20 09:41:30.859343 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:41:30.859347 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 09:41:30.859351 | orchestrator | 2025-09-20 09:41:30.859355 | orchestrator | 2025-09-20 09:41:30.859358 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:41:30.859362 | orchestrator | Saturday 20 September 2025 09:41:29 +0000 (0:00:02.384) 0:02:24.300 **** 2025-09-20 09:41:30.859366 | orchestrator | =============================================================================== 2025-09-20 09:41:30.859369 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.39s 2025-09-20 09:41:30.859373 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.85s 2025-09-20 09:41:30.859377 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.42s 2025-09-20 09:41:30.859381 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.86s 2025-09-20 09:41:30.859384 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.70s 2025-09-20 09:41:30.859388 | orchestrator | rabbitmq : Check 
rabbitmq containers ------------------------------------ 2.97s 2025-09-20 09:41:30.859392 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.50s 2025-09-20 09:41:30.859395 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.38s 2025-09-20 09:41:30.859399 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.21s 2025-09-20 09:41:30.859403 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.97s 2025-09-20 09:41:30.859406 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.85s 2025-09-20 09:41:30.859410 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.81s 2025-09-20 09:41:30.859414 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.81s 2025-09-20 09:41:30.859418 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.72s 2025-09-20 09:41:30.859421 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.65s 2025-09-20 09:41:30.859425 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.41s 2025-09-20 09:41:30.859429 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.04s 2025-09-20 09:41:30.859434 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.03s 2025-09-20 09:41:30.859438 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.02s 2025-09-20 09:41:30.859442 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.94s 2025-09-20 09:41:30.859446 | orchestrator | 2025-09-20 09:41:30 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:41:30.860123 | orchestrator | 2025-09-20 09:41:30 
| INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state STARTED 2025-09-20 09:41:30.862446 | orchestrator | 2025-09-20 09:41:30 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:41:30.862834 | orchestrator | 2025-09-20 09:41:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:42:25.730293 | orchestrator | 2025-09-20 09:42:25 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:42:25.737779 | orchestrator | 2025-09-20 09:42:25 | INFO  | Task 54412171-a5cb-4675-b67e-ed3dad523dae is in state SUCCESS 2025-09-20 09:42:25.740664 | orchestrator | 2025-09-20 09:42:25.740710 | orchestrator | 2025-09-20 09:42:25.740722 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:42:25.740734 | orchestrator | 2025-09-20 09:42:25.740745 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 09:42:25.740757 | orchestrator | Saturday 20 September 2025 09:40:05 +0000 (0:00:00.307) 0:00:00.307 **** 2025-09-20
09:42:25.740769 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:42:25.740781 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:42:25.740792 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:42:25.740803 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:42:25.740813 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:42:25.740824 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:42:25.740835 | orchestrator | 2025-09-20 09:42:25.740846 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:42:25.740857 | orchestrator | Saturday 20 September 2025 09:40:07 +0000 (0:00:01.809) 0:00:02.116 **** 2025-09-20 09:42:25.740868 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-20 09:42:25.740888 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-20 09:42:25.740899 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-20 09:42:25.740910 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-20 09:42:25.740921 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-20 09:42:25.740931 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-20 09:42:25.740942 | orchestrator | 2025-09-20 09:42:25.740953 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-20 09:42:25.740964 | orchestrator | 2025-09-20 09:42:25.740975 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-20 09:42:25.740986 | orchestrator | Saturday 20 September 2025 09:40:09 +0000 (0:00:01.741) 0:00:03.858 **** 2025-09-20 09:42:25.740999 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:42:25.741032 | orchestrator | 2025-09-20 09:42:25.741044 | orchestrator | TASK [ovn-controller : Ensuring config 
directories exist] ********************** 2025-09-20 09:42:25.741054 | orchestrator | Saturday 20 September 2025 09:40:10 +0000 (0:00:01.288) 0:00:05.146 **** 2025-09-20 09:42:25.741068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-09-20 09:42:25.741117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741139 | orchestrator | 2025-09-20 09:42:25.741164 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-20 09:42:25.741177 | orchestrator | Saturday 20 September 2025 09:40:12 +0000 (0:00:01.822) 0:00:06.969 **** 2025-09-20 09:42:25.741188 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.741268 | 
orchestrator |
2025-09-20 09:42:25.741279 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-09-20 09:42:25.741290 | orchestrator | Saturday 20 September 2025 09:40:14 +0000 (0:00:01.970) 0:00:08.940 ****
2025-09-20 09:42:25.741301 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741412 | orchestrator |
2025-09-20 09:42:25.741424 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-09-20 09:42:25.741435 | orchestrator | Saturday 20 September 2025 09:40:15 +0000 (0:00:01.355) 0:00:10.295 ****
2025-09-20 09:42:25.741446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741513 | orchestrator |
2025-09-20 09:42:25.741530 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-09-20 09:42:25.741541 | orchestrator | Saturday 20 September 2025 09:40:17 +0000 (0:00:01.636) 0:00:11.932 ****
2025-09-20 09:42:25.741562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True,
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.741634 | orchestrator |
2025-09-20 09:42:25.741645 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-09-20 09:42:25.741656 | orchestrator | Saturday 20 September 2025 09:40:18 +0000 (0:00:01.233) 0:00:13.165 ****
2025-09-20 09:42:25.741667 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:42:25.741678 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:42:25.741689 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:42:25.741700 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:42:25.741711 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:42:25.741721 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:42:25.741732 | orchestrator |
2025-09-20 09:42:25.741743 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-09-20 09:42:25.741754 | orchestrator | Saturday 20 September 2025 09:40:22 +0000 (0:00:03.786) 0:00:16.951 ****
2025-09-20 09:42:25.741765 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-09-20 09:42:25.741776 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-09-20 09:42:25.741787 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-09-20 09:42:25.741798 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-09-20 09:42:25.741816 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-09-20 09:42:25.741827 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-09-20 09:42:25.741838 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-20 09:42:25.741849 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-20 09:42:25.741865 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-20 09:42:25.741876 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-20 09:42:25.741887 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-20 09:42:25.741898 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-20 09:42:25.741911 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-20 09:42:25.741921 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-20 09:42:25.741937 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-20 09:42:25.741948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-20 09:42:25.741959 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-20 09:42:25.741971 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-20 09:42:25.741982 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-20 09:42:25.741993 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-20 09:42:25.742004 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-20 09:42:25.742085 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-20 09:42:25.742103 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-20 09:42:25.742114 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-20 09:42:25.742125 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-20 09:42:25.742135 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-20 09:42:25.742146 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-20 09:42:25.742157 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-20 09:42:25.742168 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-20 09:42:25.742178 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-20 09:42:25.742189 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-20 09:42:25.742200 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-20 09:42:25.742210 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-20 09:42:25.742221 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-20 09:42:25.742240 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-20 09:42:25.742251 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-20 09:42:25.742262 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-20 09:42:25.742273 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-20 09:42:25.742283 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-20 09:42:25.742294 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-20 09:42:25.742305 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-09-20 09:42:25.742316 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-20 09:42:25.742326 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-20 09:42:25.742337 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-09-20 09:42:25.742355 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-09-20 09:42:25.742435 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-09-20 09:42:25.742447 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-20 09:42:25.742457 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-09-20 09:42:25.742468 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-09-20 09:42:25.742479 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-20 09:42:25.742496 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-20 09:42:25.742507 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-20 09:42:25.742517 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-20 09:42:25.742527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-20 09:42:25.742537 | orchestrator |
2025-09-20 09:42:25.742546 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-20 09:42:25.742556 | orchestrator | Saturday 20 September 2025 09:40:41 +0000 (0:00:19.335) 0:00:36.287 ****
2025-09-20 09:42:25.742566 | orchestrator |
2025-09-20 09:42:25.742576 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-20 09:42:25.742586 | orchestrator | Saturday 20 September 2025 09:40:42 +0000 (0:00:00.328) 0:00:36.615 ****
2025-09-20 09:42:25.742595 | orchestrator |
2025-09-20 09:42:25.742605 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-20 09:42:25.742615 | orchestrator | Saturday 20 September 2025 09:40:42 +0000 (0:00:00.077) 0:00:36.693 ****
2025-09-20 09:42:25.742624 | orchestrator |
2025-09-20 09:42:25.742634 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-20 09:42:25.742643 | orchestrator | Saturday 20 September 2025 09:40:42 +0000 (0:00:00.074) 0:00:36.767 ****
2025-09-20 09:42:25.742660 | orchestrator |
2025-09-20 09:42:25.742669 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-20 09:42:25.742679 | orchestrator | Saturday 20 September 2025 09:40:42 +0000 (0:00:00.067) 0:00:36.835 ****
2025-09-20 09:42:25.742689 | orchestrator |
2025-09-20 09:42:25.742698 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-20 09:42:25.742708 | orchestrator | Saturday 20 September 2025 09:40:42 +0000 (0:00:00.069) 0:00:36.905 ****
2025-09-20 09:42:25.742718 | orchestrator |
2025-09-20 09:42:25.742727 | orchestrator | RUNNING HANDLER [ovn-controller :
Reload systemd config] ***********************
2025-09-20 09:42:25.742737 | orchestrator | Saturday 20 September 2025 09:40:42 +0000 (0:00:00.061) 0:00:36.967 ****
2025-09-20 09:42:25.742747 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:42:25.742756 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:42:25.742766 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.742776 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.742785 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:42:25.742795 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.742804 | orchestrator |
2025-09-20 09:42:25.742814 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-20 09:42:25.742824 | orchestrator | Saturday 20 September 2025 09:40:44 +0000 (0:00:01.560) 0:00:38.527 ****
2025-09-20 09:42:25.742834 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:42:25.742843 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:42:25.742853 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:42:25.742863 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:42:25.742872 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:42:25.742882 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:42:25.742891 | orchestrator |
2025-09-20 09:42:25.742901 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-20 09:42:25.742911 | orchestrator |
2025-09-20 09:42:25.742920 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-20 09:42:25.742930 | orchestrator | Saturday 20 September 2025 09:41:13 +0000 (0:00:29.655) 0:01:08.183 ****
2025-09-20 09:42:25.742940 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:42:25.742950 | orchestrator |
2025-09-20 09:42:25.742959 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-20 09:42:25.742969 | orchestrator | Saturday 20 September 2025 09:41:14 +0000 (0:00:00.757) 0:01:08.941 ****
2025-09-20 09:42:25.742979 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:42:25.742989 | orchestrator |
2025-09-20 09:42:25.742999 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-20 09:42:25.743008 | orchestrator | Saturday 20 September 2025 09:41:14 +0000 (0:00:00.517) 0:01:09.458 ****
2025-09-20 09:42:25.743018 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.743028 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.743037 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.743047 | orchestrator |
2025-09-20 09:42:25.743057 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-20 09:42:25.743066 | orchestrator | Saturday 20 September 2025 09:41:15 +0000 (0:00:01.019) 0:01:10.477 ****
2025-09-20 09:42:25.743076 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.743086 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.743095 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.743110 | orchestrator |
2025-09-20 09:42:25.743121 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-20 09:42:25.743130 | orchestrator | Saturday 20 September 2025 09:41:16 +0000 (0:00:00.393) 0:01:10.870 ****
2025-09-20 09:42:25.743140 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.743150 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.743160 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.743169 | orchestrator |
2025-09-20 09:42:25.743184 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-20 09:42:25.743194 | orchestrator | Saturday 20 September 2025 09:41:16 +0000 (0:00:00.445) 0:01:11.316 ****
2025-09-20 09:42:25.743203 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.743213 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.743223 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.743232 | orchestrator |
2025-09-20 09:42:25.743242 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-20 09:42:25.743251 | orchestrator | Saturday 20 September 2025 09:41:17 +0000 (0:00:00.416) 0:01:11.733 ****
2025-09-20 09:42:25.743268 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.743278 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.743288 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.743297 | orchestrator |
2025-09-20 09:42:25.743307 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-20 09:42:25.743316 | orchestrator | Saturday 20 September 2025 09:41:17 +0000 (0:00:00.611) 0:01:12.344 ****
2025-09-20 09:42:25.743326 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743336 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743345 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743355 | orchestrator |
2025-09-20 09:42:25.743383 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-20 09:42:25.743393 | orchestrator | Saturday 20 September 2025 09:41:18 +0000 (0:00:00.324) 0:01:12.668 ****
2025-09-20 09:42:25.743403 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743412 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743422 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743432 | orchestrator |
2025-09-20 09:42:25.743442 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-09-20 09:42:25.743451 | orchestrator | Saturday 20 September 2025 09:41:18 +0000 (0:00:00.305)
0:01:12.974 ****
2025-09-20 09:42:25.743461 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743471 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743480 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743490 | orchestrator |
2025-09-20 09:42:25.743500 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-09-20 09:42:25.743509 | orchestrator | Saturday 20 September 2025 09:41:18 +0000 (0:00:00.309) 0:01:13.284 ****
2025-09-20 09:42:25.743519 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743529 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743538 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743548 | orchestrator |
2025-09-20 09:42:25.743557 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-09-20 09:42:25.743567 | orchestrator | Saturday 20 September 2025 09:41:19 +0000 (0:00:00.496) 0:01:13.781 ****
2025-09-20 09:42:25.743577 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743586 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743596 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743605 | orchestrator |
2025-09-20 09:42:25.743615 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-09-20 09:42:25.743625 | orchestrator | Saturday 20 September 2025 09:41:19 +0000 (0:00:00.285) 0:01:14.066 ****
2025-09-20 09:42:25.743634 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743644 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743654 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743663 | orchestrator |
2025-09-20 09:42:25.743673 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-09-20 09:42:25.743683 | orchestrator | Saturday 20 September 2025 09:41:19 +0000 (0:00:00.308) 0:01:14.375 ****
2025-09-20 09:42:25.743692 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743702 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743711 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743721 | orchestrator |
2025-09-20 09:42:25.743731 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-09-20 09:42:25.743746 | orchestrator | Saturday 20 September 2025 09:41:20 +0000 (0:00:00.299) 0:01:14.675 ****
2025-09-20 09:42:25.743755 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743765 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743775 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743784 | orchestrator |
2025-09-20 09:42:25.743794 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-09-20 09:42:25.743804 | orchestrator | Saturday 20 September 2025 09:41:20 +0000 (0:00:00.335) 0:01:15.011 ****
2025-09-20 09:42:25.743813 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743823 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743833 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743842 | orchestrator |
2025-09-20 09:42:25.743852 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-09-20 09:42:25.743862 | orchestrator | Saturday 20 September 2025 09:41:20 +0000 (0:00:00.502) 0:01:15.514 ****
2025-09-20 09:42:25.743871 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743881 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743891 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743900 | orchestrator |
2025-09-20 09:42:25.743910 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-09-20 09:42:25.743920 | orchestrator | Saturday 20 September 2025 09:41:21 +0000 (0:00:00.297) 0:01:15.811 ****
2025-09-20 09:42:25.743930 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743939 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.743949 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.743959 | orchestrator |
2025-09-20 09:42:25.743968 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-09-20 09:42:25.743978 | orchestrator | Saturday 20 September 2025 09:41:21 +0000 (0:00:00.307) 0:01:16.118 ****
2025-09-20 09:42:25.743988 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.743997 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.744012 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.744022 | orchestrator |
2025-09-20 09:42:25.744032 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-20 09:42:25.744041 | orchestrator | Saturday 20 September 2025 09:41:21 +0000 (0:00:00.340) 0:01:16.459 ****
2025-09-20 09:42:25.744051 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:42:25.744061 | orchestrator |
2025-09-20 09:42:25.744071 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-09-20 09:42:25.744080 | orchestrator | Saturday 20 September 2025 09:41:22 +0000 (0:00:00.820) 0:01:17.279 ****
2025-09-20 09:42:25.744090 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.744099 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.744109 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.744119 | orchestrator |
2025-09-20 09:42:25.744128 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-09-20 09:42:25.744142 | orchestrator | Saturday 20 September 2025 09:41:23 +0000 (0:00:00.445) 0:01:17.725 ****
2025-09-20 09:42:25.744152 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.744162 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.744171 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.744181 | orchestrator |
2025-09-20 09:42:25.744191 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-09-20 09:42:25.744200 | orchestrator | Saturday 20 September 2025 09:41:23 +0000 (0:00:00.465) 0:01:18.191 ****
2025-09-20 09:42:25.744210 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.744220 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.744229 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.744239 | orchestrator |
2025-09-20 09:42:25.744249 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-09-20 09:42:25.744259 | orchestrator | Saturday 20 September 2025 09:41:24 +0000 (0:00:00.535) 0:01:18.726 ****
2025-09-20 09:42:25.744274 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.744284 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.744293 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.744303 | orchestrator |
2025-09-20 09:42:25.744313 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-09-20 09:42:25.744322 | orchestrator | Saturday 20 September 2025 09:41:24 +0000 (0:00:00.351) 0:01:19.077 ****
2025-09-20 09:42:25.744332 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.744342 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.744351 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.744376 | orchestrator |
2025-09-20 09:42:25.744386 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-09-20 09:42:25.744396 | orchestrator | Saturday 20 September 2025 09:41:24 +0000 (0:00:00.348) 0:01:19.426 ****
2025-09-20 09:42:25.744406 | orchestrator | skipping:
[testbed-node-0]
2025-09-20 09:42:25.744415 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.744425 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.744434 | orchestrator |
2025-09-20 09:42:25.744444 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-09-20 09:42:25.744454 | orchestrator | Saturday 20 September 2025 09:41:25 +0000 (0:00:00.334) 0:01:19.761 ****
2025-09-20 09:42:25.744463 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.744473 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.744482 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.744492 | orchestrator |
2025-09-20 09:42:25.744502 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-20 09:42:25.744511 | orchestrator | Saturday 20 September 2025 09:41:25 +0000 (0:00:00.599) 0:01:20.360 ****
2025-09-20 09:42:25.744521 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.744530 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.744540 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.744549 | orchestrator |
2025-09-20 09:42:25.744559 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-20 09:42:25.744569 | orchestrator | Saturday 20 September 2025 09:41:26 +0000 (0:00:00.407) 0:01:20.768 ****
2025-09-20 09:42:25.744579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744710 | orchestrator |
2025-09-20 09:42:25.744720 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-20 09:42:25.744729 | orchestrator | Saturday 20 September 2025 09:41:27 +0000 (0:00:01.567) 0:01:22.335 ****
2025-09-20 09:42:25.744739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744749
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744844 | orchestrator | 2025-09-20 09:42:25.744853 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-20 09:42:25.744863 | orchestrator | Saturday 20 September 2025 09:41:31 +0000 (0:00:03.880) 0:01:26.216 **** 2025-09-20 09:42:25.744873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.744933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:42:25.744977 | orchestrator | 
2025-09-20 09:42:25.744987 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-20 09:42:25.744997 | orchestrator | Saturday 20 September 2025 09:41:33 +0000 (0:00:02.037) 0:01:28.254 ****
2025-09-20 09:42:25.745006 | orchestrator | 
2025-09-20 09:42:25.745016 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-20 09:42:25.745026 | orchestrator | Saturday 20 September 2025 09:41:34 +0000 (0:00:00.297) 0:01:28.551 ****
2025-09-20 09:42:25.745035 | orchestrator | 
2025-09-20 09:42:25.745045 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-20 09:42:25.745054 | orchestrator | Saturday 20 September 2025 09:41:34 +0000 (0:00:00.068) 0:01:28.619 ****
2025-09-20 09:42:25.745064 | orchestrator | 
2025-09-20 09:42:25.745073 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-20 09:42:25.745083 | orchestrator | Saturday 20 September 2025 09:41:34 +0000 (0:00:00.068) 0:01:28.687 ****
2025-09-20 09:42:25.745092 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:42:25.745102 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:42:25.745111 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:42:25.745121 | orchestrator | 
2025-09-20 09:42:25.745130 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-20 09:42:25.745140 | orchestrator | Saturday 20 September 2025 09:41:40 +0000 (0:00:06.830) 0:01:35.517 ****
2025-09-20 09:42:25.745149 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:42:25.745159 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:42:25.745169 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:42:25.745178 | orchestrator | 
2025-09-20 09:42:25.745188 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-20 09:42:25.745197 | orchestrator | Saturday 20 September 2025 09:41:43 +0000 (0:00:02.957) 0:01:38.475 ****
2025-09-20 09:42:25.745207 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:42:25.745216 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:42:25.745226 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:42:25.745235 | orchestrator | 
2025-09-20 09:42:25.745245 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-20 09:42:25.745259 | orchestrator | Saturday 20 September 2025 09:41:46 +0000 (0:00:02.538) 0:01:41.014 ****
2025-09-20 09:42:25.745269 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:42:25.745278 | orchestrator | 
2025-09-20 09:42:25.745288 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-20 09:42:25.745297 | orchestrator | Saturday 20 September 2025 09:41:46 +0000 (0:00:00.121) 0:01:41.135 ****
2025-09-20 09:42:25.745307 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.745316 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.745326 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.745335 | orchestrator | 
2025-09-20 09:42:25.745345 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-20 09:42:25.745354 | orchestrator | Saturday 20 September 2025 09:41:47 +0000 (0:00:01.034) 0:01:42.169 ****
2025-09-20 09:42:25.745379 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.745389 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.745399 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:42:25.745408 | orchestrator | 
2025-09-20 09:42:25.745418 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-20 09:42:25.745428 | orchestrator | Saturday 20 September 2025 09:41:48 +0000 (0:00:00.691) 0:01:42.860 ****
2025-09-20 09:42:25.745437 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.745447 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.745457 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.745466 | orchestrator | 
2025-09-20 09:42:25.745476 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-20 09:42:25.745485 | orchestrator | Saturday 20 September 2025 09:41:49 +0000 (0:00:00.807) 0:01:43.668 ****
2025-09-20 09:42:25.745495 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.745505 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.745514 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:42:25.745524 | orchestrator | 
2025-09-20 09:42:25.745533 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-20 09:42:25.745543 | orchestrator | Saturday 20 September 2025 09:41:49 +0000 (0:00:00.652) 0:01:44.321 ****
2025-09-20 09:42:25.745553 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.745563 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.745577 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.745587 | orchestrator | 
2025-09-20 09:42:25.745597 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-20 09:42:25.745607 | orchestrator | Saturday 20 September 2025 09:41:50 +0000 (0:00:01.059) 0:01:45.380 ****
2025-09-20 09:42:25.745616 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.745626 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.745636 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.745645 | orchestrator | 
2025-09-20 09:42:25.745655 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-09-20 09:42:25.745664 | orchestrator | Saturday 20 September 2025 09:41:51 +0000 (0:00:00.737) 0:01:46.118 ****
2025-09-20 09:42:25.745674 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.745684 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.745693 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.745703 | orchestrator | 
2025-09-20 09:42:25.745712 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-20 09:42:25.745726 | orchestrator | Saturday 20 September 2025 09:41:51 +0000 (0:00:00.327) 0:01:46.445 ****
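(The "Wait for ovn-nb-db" / "Wait for ovn-sb-db" tasks above block until the OVN databases accept TCP connections; 6641/6642 are the conventional NB/SB ports. A minimal sketch of such a readiness probe, assuming plain TCP reachability is the check being performed and that `wait_for_port` and the example address are illustrative, not the role's actual implementation:)

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Poll until a TCP connect to host:port succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError while the port is not yet open
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

(Usage would be e.g. `wait_for_port("192.168.16.10", 6641)` before touching connection settings; the IP is a placeholder.)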
2025-09-20 09:42:25.745736 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745746 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745762 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745772 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745782 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745793 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745802 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745812 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745833 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745843 | orchestrator | 2025-09-20 09:42:25.745853 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-20 09:42:25.745863 | 
orchestrator | Saturday 20 September 2025 09:41:53 +0000 (0:00:01.361) 0:01:47.807 **** 2025-09-20 09:42:25.745872 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745887 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745903 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745913 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745963 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.745972 | orchestrator | 2025-09-20 
09:42:25.745982 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-20 09:42:25.745992 | orchestrator | Saturday 20 September 2025 09:41:57 +0000 (0:00:04.441) 0:01:52.248 **** 2025-09-20 09:42:25.746007 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.746043 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.746069 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.746080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.746090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.746100 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.746110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.746120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.746130 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:42:25.746140 | orchestrator | 2025-09-20 09:42:25.746149 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-20 09:42:25.746159 | orchestrator | Saturday 20 September 2025 09:42:00 +0000 (0:00:02.725) 0:01:54.974 **** 2025-09-20 09:42:25.746168 | orchestrator | 2025-09-20 09:42:25.746178 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-20 09:42:25.746187 | orchestrator | Saturday 20 September 2025 09:42:00 +0000 (0:00:00.062) 0:01:55.036 **** 2025-09-20 09:42:25.746197 | orchestrator | 2025-09-20 09:42:25.746206 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-20 09:42:25.746216 | orchestrator | Saturday 20 September 2025 09:42:00 +0000 (0:00:00.061) 0:01:55.098 **** 2025-09-20 09:42:25.746225 | orchestrator | 2025-09-20 09:42:25.746235 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-20 09:42:25.746244 | orchestrator | Saturday 20 September 2025 09:42:00 +0000 (0:00:00.060) 0:01:55.158 **** 2025-09-20 09:42:25.746254 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:42:25.746269 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:42:25.746279 | orchestrator | 2025-09-20 09:42:25.746295 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-20 09:42:25.746305 | orchestrator | Saturday 20 September 2025 09:42:06 +0000 (0:00:06.186) 0:02:01.345 **** 2025-09-20 09:42:25.746314 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:42:25.746324 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:42:25.746333 | orchestrator | 2025-09-20 09:42:25.746343 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-20 09:42:25.746353 | orchestrator | Saturday 20 September 
2025 09:42:13 +0000 (0:00:06.599) 0:02:07.944 **** 2025-09-20 09:42:25.746386 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:42:25.746396 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:42:25.746406 | orchestrator | 2025-09-20 09:42:25.746415 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-20 09:42:25.746425 | orchestrator | Saturday 20 September 2025 09:42:20 +0000 (0:00:06.879) 0:02:14.824 **** 2025-09-20 09:42:25.746434 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:42:25.746444 | orchestrator | 2025-09-20 09:42:25.746457 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-20 09:42:25.746467 | orchestrator | Saturday 20 September 2025 09:42:20 +0000 (0:00:00.154) 0:02:14.978 **** 2025-09-20 09:42:25.746477 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:42:25.746486 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:42:25.746496 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:42:25.746505 | orchestrator | 2025-09-20 09:42:25.746515 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-20 09:42:25.746524 | orchestrator | Saturday 20 September 2025 09:42:21 +0000 (0:00:00.757) 0:02:15.736 **** 2025-09-20 09:42:25.746534 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:42:25.746543 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:42:25.746552 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:42:25.746562 | orchestrator | 2025-09-20 09:42:25.746572 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-20 09:42:25.746581 | orchestrator | Saturday 20 September 2025 09:42:21 +0000 (0:00:00.601) 0:02:16.338 **** 2025-09-20 09:42:25.746591 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:42:25.746600 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:42:25.746610 | orchestrator | ok: 
[testbed-node-2]
2025-09-20 09:42:25.746619 | orchestrator | 
2025-09-20 09:42:25.746629 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-20 09:42:25.746639 | orchestrator | Saturday 20 September 2025 09:42:22 +0000 (0:00:00.750) 0:02:17.088 ****
2025-09-20 09:42:25.746648 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:42:25.746658 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:42:25.746667 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:42:25.746676 | orchestrator | 
2025-09-20 09:42:25.746686 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-20 09:42:25.746696 | orchestrator | Saturday 20 September 2025 09:42:23 +0000 (0:00:00.831) 0:02:17.919 ****
2025-09-20 09:42:25.746705 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.746715 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.746724 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.746733 | orchestrator | 
2025-09-20 09:42:25.746743 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-20 09:42:25.746752 | orchestrator | Saturday 20 September 2025 09:42:24 +0000 (0:00:00.735) 0:02:18.655 ****
2025-09-20 09:42:25.746762 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:42:25.746771 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:42:25.746781 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:42:25.746790 | orchestrator | 
2025-09-20 09:42:25.746800 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:42:25.746809 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-20 09:42:25.746825 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-20 09:42:25.746835 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-20 09:42:25.746845 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:42:25.746855 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:42:25.746864 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:42:25.746874 | orchestrator | 
2025-09-20 09:42:25.746883 | orchestrator | 
2025-09-20 09:42:25.746893 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:42:25.746902 | orchestrator | Saturday 20 September 2025 09:42:24 +0000 (0:00:00.852) 0:02:19.507 ****
2025-09-20 09:42:25.746912 | orchestrator | ===============================================================================
2025-09-20 09:42:25.746921 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.66s
2025-09-20 09:42:25.746931 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.34s
2025-09-20 09:42:25.746941 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.02s
2025-09-20 09:42:25.746950 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.56s
2025-09-20 09:42:25.746960 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.42s
2025-09-20 09:42:25.746969 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.44s
2025-09-20 09:42:25.746979 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.88s
2025-09-20 09:42:25.746993 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.79s
2025-09-20 09:42:25.747003 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.73s
2025-09-20 09:42:25.747013 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.04s
2025-09-20 09:42:25.747022 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.97s
2025-09-20 09:42:25.747032 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.82s
2025-09-20 09:42:25.747041 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.81s
2025-09-20 09:42:25.747051 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.74s
2025-09-20 09:42:25.747060 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.64s
2025-09-20 09:42:25.747069 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s
2025-09-20 09:42:25.747083 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.56s
2025-09-20 09:42:25.747092 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s
2025-09-20 09:42:25.747102 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.36s
2025-09-20 09:42:25.747112 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.29s
2025-09-20 09:42:25.747121 | orchestrator | 2025-09-20 09:42:25 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:42:25.747131 | orchestrator | 2025-09-20 09:42:25 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:42:28.786158 | orchestrator | 2025-09-20 09:42:28 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:42:28.790161 | orchestrator | 2025-09-20 09:42:28 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED
2025-09-20 09:42:28.790467 | orchestrator | 2025-09-20 09:42:28 | 
INFO  | Wait 1 second(s) until the next check [identical 1-second status polls for tasks 73b6c484-0da2-4565-b81c-53702356cc50 and 322ee894-2307-48d4-9dc9-80a9465f6e85, both in state STARTED, repeated from 09:42:31 through 09:45:04 and elided here] 2025-09-20
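The 1-second status polls above follow a simple poll-until-done pattern: query each task's state, report it, and loop while any task is still STARTED. A minimal sketch of such a loop, assuming a caller-supplied `get_task_state` helper (the function names here are hypothetical, not the actual OSISM client API):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600):
    """Poll task states until none is left in STARTED, as in the log above.

    get_task_state maps a task id to a state string such as "STARTED"
    or "SUCCESS" (hypothetical helper supplied by the caller).
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # Check every pending task once per cycle and log its state.
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"INFO  | Task {task_id} is in state {state}")
        # Drop tasks that have finished; wait before the next cycle.
        pending = {t for t in pending if states[t] == "STARTED"}
        if pending:
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Once a task transitions out of STARTED (to SUCCESS here at 09:45:19), it is no longer polled, while remaining tasks continue to be checked each cycle.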
09:45:07.333053 | orchestrator | 2025-09-20 09:45:07 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:10.374433 | orchestrator | 2025-09-20 09:45:10 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:45:10.375818 | orchestrator | 2025-09-20 09:45:10 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:45:10.375844 | orchestrator | 2025-09-20 09:45:10 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:13.425374 | orchestrator | 2025-09-20 09:45:13 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:45:13.426453 | orchestrator | 2025-09-20 09:45:13 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:45:13.426626 | orchestrator | 2025-09-20 09:45:13 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:16.468750 | orchestrator | 2025-09-20 09:45:16 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:45:16.468852 | orchestrator | 2025-09-20 09:45:16 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state STARTED 2025-09-20 09:45:16.468866 | orchestrator | 2025-09-20 09:45:16 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:19.503518 | orchestrator | 2025-09-20 09:45:19 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED 2025-09-20 09:45:19.503941 | orchestrator | 2025-09-20 09:45:19 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 is in state STARTED 2025-09-20 09:45:19.504964 | orchestrator | 2025-09-20 09:45:19 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:45:19.513230 | orchestrator | 2025-09-20 09:45:19 | INFO  | Task 322ee894-2307-48d4-9dc9-80a9465f6e85 is in state SUCCESS 2025-09-20 09:45:19.514393 | orchestrator | 2025-09-20 09:45:19.514427 | orchestrator | 2025-09-20 09:45:19.514440 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-09-20 09:45:19.514453 | orchestrator | 2025-09-20 09:45:19.514465 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 09:45:19.514477 | orchestrator | Saturday 20 September 2025 09:38:51 +0000 (0:00:00.314) 0:00:00.314 **** 2025-09-20 09:45:19.514488 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.514500 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.514511 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.514522 | orchestrator | 2025-09-20 09:45:19.514561 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:45:19.514705 | orchestrator | Saturday 20 September 2025 09:38:51 +0000 (0:00:00.439) 0:00:00.754 **** 2025-09-20 09:45:19.514722 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-20 09:45:19.514733 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-20 09:45:19.514759 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-20 09:45:19.514771 | orchestrator | 2025-09-20 09:45:19.514782 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-20 09:45:19.514792 | orchestrator | 2025-09-20 09:45:19.514803 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-20 09:45:19.514814 | orchestrator | Saturday 20 September 2025 09:38:52 +0000 (0:00:00.725) 0:00:01.480 **** 2025-09-20 09:45:19.514825 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.514836 | orchestrator | 2025-09-20 09:45:19.514846 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-20 09:45:19.514857 | orchestrator | Saturday 20 September 2025 09:38:53 +0000 (0:00:00.811) 0:00:02.291 **** 
2025-09-20 09:45:19.514868 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.514879 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.514890 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.514901 | orchestrator | 2025-09-20 09:45:19.514912 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-20 09:45:19.514922 | orchestrator | Saturday 20 September 2025 09:38:55 +0000 (0:00:01.810) 0:00:04.102 **** 2025-09-20 09:45:19.514933 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.514944 | orchestrator | 2025-09-20 09:45:19.514955 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-20 09:45:19.514966 | orchestrator | Saturday 20 September 2025 09:38:56 +0000 (0:00:01.436) 0:00:05.538 **** 2025-09-20 09:45:19.514976 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.514987 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.514998 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.515008 | orchestrator | 2025-09-20 09:45:19.515019 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-20 09:45:19.515030 | orchestrator | Saturday 20 September 2025 09:38:57 +0000 (0:00:00.862) 0:00:06.401 **** 2025-09-20 09:45:19.515041 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-20 09:45:19.515052 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-20 09:45:19.515062 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-20 09:45:19.515073 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-20 09:45:19.515083 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 
'value': 1}) 2025-09-20 09:45:19.515094 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-20 09:45:19.515104 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-20 09:45:19.515116 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-20 09:45:19.515127 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-20 09:45:19.515160 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-20 09:45:19.515199 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-20 09:45:19.515211 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-20 09:45:19.515222 | orchestrator | 2025-09-20 09:45:19.515242 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-20 09:45:19.515360 | orchestrator | Saturday 20 September 2025 09:39:00 +0000 (0:00:02.755) 0:00:09.157 **** 2025-09-20 09:45:19.515371 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-20 09:45:19.515382 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-20 09:45:19.515393 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-20 09:45:19.515405 | orchestrator | 2025-09-20 09:45:19.515416 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-20 09:45:19.515427 | orchestrator | Saturday 20 September 2025 09:39:01 +0000 (0:00:00.885) 0:00:10.043 **** 2025-09-20 09:45:19.515438 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-20 09:45:19.515449 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-20 09:45:19.515459 | orchestrator | changed: 
[testbed-node-0] => (item=ip_vs)
2025-09-20 09:45:19.515470 | orchestrator |
2025-09-20 09:45:19.515481 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-20 09:45:19.515492 | orchestrator | Saturday 20 September 2025 09:39:02 +0000 (0:00:01.699) 0:00:11.742 ****
2025-09-20 09:45:19.515503 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-09-20 09:45:19.515513 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.515537 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-09-20 09:45:19.515548 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.515559 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-09-20 09:45:19.515570 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.515580 | orchestrator |
2025-09-20 09:45:19.515591 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-09-20 09:45:19.515602 | orchestrator | Saturday 20 September 2025 09:39:03 +0000 (0:00:00.726) 0:00:12.468 ****
2025-09-20 09:45:19.515622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.515641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.515653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.515665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.515685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.515703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.515716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.515763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.515777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.515815 | orchestrator |
2025-09-20 09:45:19.515874 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-09-20 09:45:19.515886 | orchestrator | Saturday 20 September 2025 09:39:05 +0000 (0:00:02.106) 0:00:14.574 ****
2025-09-20 09:45:19.515897 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.515908 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.515919 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.515930 | orchestrator |
2025-09-20 09:45:19.516003 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-09-20 09:45:19.516014 | orchestrator | Saturday 20 September 2025 09:39:06 +0000 (0:00:01.203) 0:00:15.778 ****
2025-09-20 09:45:19.516033 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-09-20 09:45:19.516044 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-09-20 09:45:19.516055 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-09-20 09:45:19.516066 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-09-20 09:45:19.516077 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-09-20 09:45:19.516087 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-09-20 09:45:19.516098 | orchestrator |
2025-09-20 09:45:19.516109 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-09-20 09:45:19.516120 | orchestrator | Saturday 20 September 2025 09:39:08 +0000 (0:00:02.057) 0:00:17.835 ****
2025-09-20 09:45:19.516173 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.516184 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.516195 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.516206 | orchestrator |
2025-09-20 09:45:19.516217 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-09-20 09:45:19.516251 | orchestrator | Saturday 20 September 2025 09:39:10 +0000 (0:00:01.165) 0:00:19.001 ****
2025-09-20 09:45:19.516264 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:45:19.516275 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:45:19.516286 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:45:19.516297 | orchestrator |
2025-09-20 09:45:19.516308 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-09-20 09:45:19.516319 | orchestrator | Saturday 20 September 2025 09:39:12 +0000 (0:00:02.310) 0:00:21.311 ****
2025-09-20 09:45:19.516331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.516352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.516396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.516410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.516430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.516442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.516454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-20 09:45:19.516467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-20 09:45:19.516478 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.516489 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.516509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.516522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.516539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.516717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-20 09:45:19.516736 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.516747 | orchestrator |
2025-09-20 09:45:19.516759 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-09-20 09:45:19.516770 | orchestrator | Saturday 20 September 2025 09:39:13 +0000 (0:00:01.139) 0:00:22.450 ****
2025-09-20 09:45:19.516842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.516856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.516876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.516894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.516919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.516931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-20 09:45:19.516942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.516954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.516965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-20 09:45:19.516983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.517000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.517026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d', '__omit_place_holder__eed711291459463d3b99c3a5332abf828cdf4a8d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-20 09:45:19.517038 | orchestrator |
2025-09-20 09:45:19.517049 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-09-20 09:45:19.517060 | orchestrator | Saturday 20 September 2025 09:39:17 +0000 (0:00:03.512) 0:00:25.963 ****
2025-09-20 09:45:19.517071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.517083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.517094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.517113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.517158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.517171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.517182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.517194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.517205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-20 09:45:19.517216 | orchestrator |
2025-09-20 09:45:19.517227 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-09-20 09:45:19.517238 | orchestrator | Saturday 20 September 2025 09:39:20 +0000 (0:00:03.598) 0:00:29.561 ****
2025-09-20 09:45:19.517249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-20 09:45:19.517260 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-20 09:45:19.517271 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-20 09:45:19.517282 | orchestrator |
2025-09-20 09:45:19.517293 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-09-20 09:45:19.517304 | orchestrator | Saturday 20 September 2025 09:39:23 +0000 (0:00:02.375) 0:00:31.936 ****
2025-09-20 09:45:19.517314 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-20 09:45:19.517325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-20 09:45:19.517342 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-20 09:45:19.517353 | orchestrator |
2025-09-20 09:45:19.519849 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-09-20 09:45:19.519893 | orchestrator | Saturday 20 September 2025 09:39:29 +0000 (0:00:06.493) 0:00:38.430 ****
2025-09-20 09:45:19.520013 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.520075 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.520087 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.520099 | orchestrator |
2025-09-20 09:45:19.520110 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-09-20 09:45:19.520121 | orchestrator | Saturday 20 September 2025 09:39:30 +0000 (0:00:01.298) 0:00:39.729 ****
2025-09-20 09:45:19.520195 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-20 09:45:19.520216 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-20 09:45:19.520228 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-20 09:45:19.520239 | orchestrator |
2025-09-20 09:45:19.520250 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-09-20 09:45:19.520260 | orchestrator | Saturday 20 September 2025 09:39:33 +0000 (0:00:02.903) 0:00:42.634 ****
2025-09-20 09:45:19.520271 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-20 09:45:19.520282 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-20 09:45:19.520293 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-20 09:45:19.520304 | orchestrator |
2025-09-20 09:45:19.520315 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-09-20 09:45:19.520326 | orchestrator | Saturday 20 September 2025 09:39:37 +0000 (0:00:03.850) 0:00:46.485 ****
2025-09-20 09:45:19.520337 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-09-20 09:45:19.520348 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-09-20 09:45:19.520359 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-09-20 09:45:19.520370 | orchestrator |
2025-09-20 09:45:19.520381 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-09-20 09:45:19.520391 | orchestrator | Saturday 20 September 2025 09:39:39 +0000 (0:00:02.209) 0:00:48.695 ****
2025-09-20 09:45:19.520402 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-09-20 09:45:19.520413 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-09-20 09:45:19.520424 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-09-20 09:45:19.520434 | orchestrator |
2025-09-20 09:45:19.520445 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-20 09:45:19.520456 | orchestrator | Saturday 20 September 2025 09:39:42 +0000 (0:00:02.641) 0:00:51.336 ****
2025-09-20 09:45:19.520466 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.520477 | orchestrator |
2025-09-20 09:45:19.520488 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-09-20 09:45:19.520498 | orchestrator | Saturday 20 September 2025 09:39:42 +0000 (0:00:00.554) 0:00:51.891 ****
2025-09-20 09:45:19.520511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.520536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.520559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-20 09:45:19.520576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-20 09:45:19.520588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 09:45:19.520634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 09:45:19.520688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 09:45:19.520706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2025-09-20 09:45:19.520717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 09:45:19.520729 | orchestrator | 2025-09-20 09:45:19.520738 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-20 09:45:19.520748 | orchestrator | Saturday 20 September 2025 09:39:46 +0000 (0:00:03.910) 0:00:55.802 **** 2025-09-20 09:45:19.520766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.520781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.520791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.520801 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.520811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.520821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.520836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.520846 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.520857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.520872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.520887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.520898 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.520908 | orchestrator | 2025-09-20 09:45:19.520917 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-20 09:45:19.520927 | orchestrator | Saturday 20 September 2025 09:39:47 +0000 (0:00:01.085) 0:00:56.888 **** 2025-09-20 09:45:19.520937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.520948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.520963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.520973 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.520983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.520999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521024 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.521033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521069 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.521078 | orchestrator | 2025-09-20 09:45:19.521088 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-20 09:45:19.521098 | orchestrator | Saturday 20 September 2025 09:39:49 +0000 (0:00:01.200) 0:00:58.089 **** 2025-09-20 09:45:19.521107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521123 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521165 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.521180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521190 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521215 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.521225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521261 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.521271 | orchestrator | 2025-09-20 09:45:19.521281 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-20 09:45:19.521290 | orchestrator | Saturday 20 September 2025 09:39:50 +0000 (0:00:01.352) 0:00:59.441 **** 2025-09-20 09:45:19.521307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521343 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.521353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521383 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.521398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521439 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.521448 | orchestrator | 2025-09-20 09:45:19.521458 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-20 09:45:19.521468 | orchestrator | Saturday 20 September 2025 09:39:51 +0000 (0:00:01.097) 0:01:00.539 **** 2025-09-20 09:45:19.521477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521545 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.521556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521566 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.521575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521605 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.521615 | orchestrator | 2025-09-20 09:45:19.521624 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-20 09:45:19.521634 | orchestrator | Saturday 20 September 2025 09:39:54 +0000 (0:00:02.416) 0:01:02.955 **** 2025-09-20 09:45:19.521644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521689 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.521699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521772 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.521782 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.521791 | orchestrator | 2025-09-20 09:45:19.521801 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-20 09:45:19.521815 | orchestrator | Saturday 20 September 2025 09:39:55 +0000 (0:00:01.158) 
0:01:04.114 **** 2025-09-20 09:45:19.521825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521856 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.521866 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.521906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 
09:45:19.521921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.521931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.521951 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.521961 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.521970 | orchestrator | 2025-09-20 09:45:19.521980 | orchestrator | TASK [service-cert-copy : 
proxysql | Copying over backend internal TLS key] **** 2025-09-20 09:45:19.521990 | orchestrator | Saturday 20 September 2025 09:39:55 +0000 (0:00:00.571) 0:01:04.685 **** 2025-09-20 09:45:19.522000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.522010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.522072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.522084 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.522101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.522116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.522126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-09-20 09:45:19.522184 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.522195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-20 09:45:19.522205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-20 09:45:19.522215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-20 09:45:19.522231 | 
orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.522241 | orchestrator | 2025-09-20 09:45:19.522251 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-20 09:45:19.522260 | orchestrator | Saturday 20 September 2025 09:39:56 +0000 (0:00:00.893) 0:01:05.578 **** 2025-09-20 09:45:19.522270 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-20 09:45:19.522280 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-20 09:45:19.522295 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-20 09:45:19.522305 | orchestrator | 2025-09-20 09:45:19.522315 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-20 09:45:19.522324 | orchestrator | Saturday 20 September 2025 09:39:58 +0000 (0:00:01.951) 0:01:07.530 **** 2025-09-20 09:45:19.522334 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-20 09:45:19.522343 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-20 09:45:19.522353 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-20 09:45:19.522363 | orchestrator | 2025-09-20 09:45:19.522372 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-20 09:45:19.522390 | orchestrator | Saturday 20 September 2025 09:40:00 +0000 (0:00:01.761) 0:01:09.292 **** 2025-09-20 09:45:19.522400 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 09:45:19.522409 | orchestrator | skipping: [testbed-node-0] => 
(item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 09:45:19.522419 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.522429 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 09:45:19.522438 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 09:45:19.522448 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 09:45:19.522457 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.522467 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 09:45:19.522476 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.522486 | orchestrator | 2025-09-20 09:45:19.522495 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-20 09:45:19.522505 | orchestrator | Saturday 20 September 2025 09:40:01 +0000 (0:00:01.405) 0:01:10.697 **** 2025-09-20 09:45:19.522515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-20 09:45:19.522525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 09:45:19.522541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-20 09:45:19.522557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-20 09:45:19.522568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 09:45:19.522578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 09:45:19.522588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-20 09:45:19.522598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 09:45:19.522633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-20 09:45:19.522643 | orchestrator | 2025-09-20 09:45:19.522653 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-20 09:45:19.522662 | orchestrator | Saturday 20 September 2025 09:40:04 +0000 (0:00:02.920) 0:01:13.618 **** 2025-09-20 09:45:19.522672 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.522681 | orchestrator | 2025-09-20 09:45:19.522691 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-20 09:45:19.522699 | orchestrator | Saturday 20 September 2025 09:40:05 +0000 (0:00:00.671) 0:01:14.289 **** 2025-09-20 09:45:19.522708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-20 09:45:19.522723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.522735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-20 09:45:19.522765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.522773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-20 09:45:19.522806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.522815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522835 | orchestrator | 2025-09-20 09:45:19.522843 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-20 09:45:19.522851 | orchestrator | Saturday 20 September 2025 09:40:11 +0000 (0:00:06.148) 0:01:20.438 **** 2025-09-20 09:45:19.522859 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-20 09:45:19.522873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.522885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522906 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.522914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-20 09:45:19.522922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.522931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.522947 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.522975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-20 09:45:19.522985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.522998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523014 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.523022 | orchestrator | 2025-09-20 09:45:19.523030 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-20 09:45:19.523038 | orchestrator | Saturday 20 September 2025 09:40:13 +0000 (0:00:01.912) 0:01:22.351 **** 2025-09-20 09:45:19.523046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-20 09:45:19.523055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-20 09:45:19.523063 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.523071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-20 09:45:19.523079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-20 09:45:19.523154 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.523164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-20 09:45:19.523172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-20 09:45:19.523180 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.523188 | orchestrator | 2025-09-20 09:45:19.523210 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-20 09:45:19.523219 | orchestrator | Saturday 20 September 2025 09:40:15 +0000 (0:00:01.741) 0:01:24.092 **** 2025-09-20 09:45:19.523227 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.523235 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.523242 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.523250 | orchestrator | 2025-09-20 09:45:19.523258 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-20 09:45:19.523266 | orchestrator | Saturday 20 September 2025 09:40:16 +0000 (0:00:01.653) 0:01:25.745 **** 2025-09-20 09:45:19.523273 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.523287 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.523295 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.523346 | orchestrator | 2025-09-20 09:45:19.523355 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-20 09:45:19.523367 | orchestrator | Saturday 20 September 2025 09:40:18 +0000 (0:00:02.027) 0:01:27.773 **** 2025-09-20 09:45:19.523375 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.523383 | orchestrator | 2025-09-20 09:45:19.523391 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-20 09:45:19.523399 | orchestrator | Saturday 20 September 2025 09:40:19 +0000 (0:00:01.012) 0:01:28.785 **** 2025-09-20 09:45:19.523408 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.523417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.523426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.523486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523591 | orchestrator | 2025-09-20 09:45:19.523599 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-20 09:45:19.523607 | orchestrator | Saturday 20 September 2025 09:40:23 +0000 (0:00:03.599) 0:01:32.384 **** 2025-09-20 09:45:19.523621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 
09:45:19.523640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523657 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.523665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.523673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.523697 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.523710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.523722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.523731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.523739 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.523747 | orchestrator |
2025-09-20 09:45:19.523755 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-09-20 09:45:19.523763 | orchestrator | Saturday 20 September 2025 09:40:24 +0000 (0:00:01.029) 0:01:33.413 ****
2025-09-20 09:45:19.523771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-20 09:45:19.523779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-20 09:45:19.523788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-20 09:45:19.523795 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.523803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-20 09:45:19.523811 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.523819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-20 09:45:19.523827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-09-20 09:45:19.523842 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.523850 | orchestrator |
2025-09-20 09:45:19.523857 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-09-20 09:45:19.523865 | orchestrator | Saturday 20 September 2025 09:40:25 +0000 (0:00:00.913) 0:01:34.327 ****
2025-09-20 09:45:19.523873 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.523881 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.523889 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.523897 | orchestrator |
2025-09-20 09:45:19.523904 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-09-20 09:45:19.523912 | orchestrator | Saturday 20 September 2025 09:40:26 +0000 (0:00:01.320) 0:01:35.647 ****
2025-09-20 09:45:19.523920 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.523928 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.523936 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.523944 | orchestrator |
2025-09-20 09:45:19.523956 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-09-20 09:45:19.523964 | orchestrator | Saturday 20 September 2025 09:40:28 +0000 (0:00:01.992) 0:01:37.640 ****
2025-09-20 09:45:19.523972 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.523980 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.523987 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.523995 | orchestrator |
2025-09-20 09:45:19.524003 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-09-20 09:45:19.524011 | orchestrator | Saturday 20 September 2025 09:40:29 +0000 (0:00:00.308) 0:01:37.949 ****
2025-09-20 09:45:19.524051 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.524061 | orchestrator |
2025-09-20 09:45:19.524069 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-09-20 09:45:19.524077 | orchestrator | Saturday 20 September 2025 09:40:29 +0000 (0:00:00.830) 0:01:38.779 ****
2025-09-20 09:45:19.524090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-20 09:45:19.524100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-20 09:45:19.524108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-20 09:45:19.524122 | orchestrator |
2025-09-20 09:45:19.524145 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-09-20 09:45:19.524154 | orchestrator | Saturday 20 September 2025 09:40:32 +0000 (0:00:02.628) 0:01:41.407 ****
2025-09-20 09:45:19.524168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-20 09:45:19.524177 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.524189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-20 09:45:19.524197 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.524206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-09-20 09:45:19.524214 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.524222 | orchestrator |
2025-09-20 09:45:19.524230 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-09-20 09:45:19.524237 | orchestrator | Saturday 20 September 2025 09:40:34 +0000 (0:00:01.812) 0:01:43.220 ****
2025-09-20 09:45:19.524246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-20 09:45:19.524270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-20 09:45:19.524280 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.524288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-20 09:45:19.524296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-20 09:45:19.524304 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.524317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-20 09:45:19.524326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-09-20 09:45:19.524408 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.524418 | orchestrator |
2025-09-20 09:45:19.524426 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-09-20 09:45:19.524438 | orchestrator | Saturday 20 September 2025 09:40:36 +0000 (0:00:01.844) 0:01:45.064 ****
2025-09-20 09:45:19.524446 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.524454 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.524462 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.524479 | orchestrator |
2025-09-20 09:45:19.524488 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-09-20 09:45:19.524496 | orchestrator | Saturday 20 September 2025 09:40:36 +0000 (0:00:00.735) 0:01:45.799 ****
2025-09-20 09:45:19.524504 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.524512 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.524519 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.524527 | orchestrator |
2025-09-20 09:45:19.524535 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-09-20 09:45:19.524543 | orchestrator | Saturday 20 September 2025 09:40:38 +0000 (0:00:01.373) 0:01:47.173 ****
2025-09-20 09:45:19.524551 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.524564 | orchestrator |
2025-09-20 09:45:19.524572 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-09-20 09:45:19.524580 | orchestrator | Saturday 20 September 2025 09:40:39 +0000 (0:00:00.784) 0:01:47.957 ****
2025-09-20 09:45:19.524588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.524597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.524646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.524687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524720 | orchestrator |
2025-09-20 09:45:19.524729 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-09-20 09:45:19.524737 | orchestrator | Saturday 20 September 2025 09:40:43 +0000 (0:00:04.279) 0:01:52.237 ****
2025-09-20 09:45:19.524745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.524753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.524804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524821 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.524829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524859 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.524876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.524885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.524910 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.524918 | orchestrator |
2025-09-20 09:45:19.524926 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-20 09:45:19.524934 | orchestrator | Saturday 20 September 2025 09:40:44 +0000 (0:00:00.805) 0:01:53.042 ****
2025-09-20 09:45:19.524942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 09:45:19.524964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 09:45:19.524973 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.524981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 09:45:19.524994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 09:45:19.525002 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.525014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 09:45:19.525022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-20 09:45:19.525030 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.525038 | orchestrator |
2025-09-20 09:45:19.525046 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-20 09:45:19.525054 | orchestrator | Saturday 20 September 2025 09:40:45 +0000 (0:00:00.943) 0:01:53.986 ****
2025-09-20 09:45:19.525062 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.525070 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.525078 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.525085 | orchestrator |
2025-09-20 09:45:19.525093 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-20 09:45:19.525101 | orchestrator | Saturday 20 September 2025 09:40:46 +0000 (0:00:01.399) 0:01:55.386 ****
2025-09-20 09:45:19.525109 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.525117 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.525125 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.525266 | orchestrator |
2025-09-20 09:45:19.525276 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-20 09:45:19.525284 | orchestrator | Saturday 20 September 2025 09:40:48 +0000 (0:00:02.234) 0:01:57.620 ****
2025-09-20 09:45:19.525292 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.525300 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.525308 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.525316 | orchestrator |
2025-09-20 09:45:19.525324 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-09-20 09:45:19.525331 | orchestrator | Saturday 20 September 2025 09:40:49 +0000 (0:00:00.363) 0:01:58.029 ****
2025-09-20 09:45:19.525339 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.525347 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.525355 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.525363 | orchestrator |
2025-09-20 09:45:19.525371 | orchestrator | TASK [include_role : designate] ************************************************
2025-09-20 09:45:19.525379 | orchestrator | Saturday 20 September 2025 09:40:49 +0000 (0:00:00.363) 0:01:58.392 ****
2025-09-20 09:45:19.525385 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.525392 | orchestrator |
2025-09-20 09:45:19.525399 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-09-20 09:45:19.525405 | orchestrator | Saturday 20 September 2025 09:40:50 +0000 (0:00:00.990) 0:01:59.383 ****
2025-09-20 09:45:19.525412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled':
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 09:45:19.525431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:45:19.525438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525489 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 09:45:19.525506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:45:19.525517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 09:45:19.525539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:45:19.525569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525626 | orchestrator | 2025-09-20 09:45:19.525633 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-20 09:45:19.525640 | orchestrator | Saturday 20 September 2025 09:40:54 +0000 (0:00:03.866) 0:02:03.250 **** 2025-09-20 09:45:19.525659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:45:19.525667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:45:19.525674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-09-20 09:45:19.525681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525732 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.525743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:45:19.525750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:45:19.525757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525808 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.525819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:45:19.525826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:45:19.525837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.525844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.525851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.525869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.525880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.525887 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.525894 | orchestrator |
2025-09-20 09:45:19.525901 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-09-20 09:45:19.525907 | orchestrator | Saturday 20 September 2025 09:40:55 +0000 (0:00:00.844) 0:02:04.094 ****
2025-09-20 09:45:19.525914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-20 09:45:19.525921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-20 09:45:19.525928 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.525939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-20 09:45:19.525946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-20 09:45:19.525952 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.525959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-20 09:45:19.525966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-20 09:45:19.525972 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.525979 | orchestrator |
2025-09-20 09:45:19.525986 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-09-20 09:45:19.525992 | orchestrator | Saturday 20 September 2025 09:40:56 +0000 (0:00:01.054) 0:02:05.149 ****
2025-09-20 09:45:19.525999 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.526005 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.526012 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.526226 | orchestrator |
2025-09-20 09:45:19.526234 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-09-20 09:45:19.526240 | orchestrator | Saturday 20 September 2025 09:40:57 +0000 (0:00:01.556) 0:02:06.705 ****
2025-09-20 09:45:19.526247 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.526254 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.526260 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.526267 | orchestrator |
2025-09-20 09:45:19.526274 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-09-20 09:45:19.526281 | orchestrator | Saturday 20 September 2025 09:40:59 +0000 (0:00:01.801) 0:02:08.507 ****
2025-09-20 09:45:19.526287 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.526305 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.526312 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.526319 | orchestrator |
2025-09-20 09:45:19.526326 | orchestrator | TASK [include_role : glance] ***************************************************
2025-09-20 09:45:19.526333 | orchestrator | Saturday 20 September 2025 09:41:00 +0000 (0:00:00.436) 0:02:08.944 ****
2025-09-20 09:45:19.526339 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.526346 |
orchestrator | 2025-09-20 09:45:19.526353 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-20 09:45:19.526359 | orchestrator | Saturday 20 September 2025 09:41:00 +0000 (0:00:00.781) 0:02:09.725 **** 2025-09-20 09:45:19.526380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-09-20 09:45:19.526395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.526412 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:45:19.526426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.526439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:45:19.526451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.526462 | orchestrator | 2025-09-20 09:45:19.526469 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-20 09:45:19.526476 | orchestrator | Saturday 20 September 2025 09:41:04 +0000 (0:00:03.687) 0:02:13.412 **** 2025-09-20 09:45:19.526487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 09:45:19.526499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.526511 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.526518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 09:45:19.526545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 09:45:19.526558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.526566 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.526588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.526601 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.526608 | orchestrator | 2025-09-20 09:45:19.526614 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-20 09:45:19.526621 | orchestrator | Saturday 20 September 2025 09:41:07 +0000 (0:00:02.927) 0:02:16.340 **** 2025-09-20 09:45:19.526628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 09:45:19.526635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 09:45:19.526642 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.526649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 09:45:19.526656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-20 09:45:19.526663 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.526670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-20 09:45:19.526688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-20 09:45:19.526700 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.526707 | orchestrator |
2025-09-20 09:45:19.526714 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-09-20 09:45:19.526720 | orchestrator | Saturday 20 September 2025 09:41:10 +0000 (0:00:03.259) 0:02:19.600 ****
2025-09-20 09:45:19.526727 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.526733 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.526740 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.526747 | orchestrator |
2025-09-20 09:45:19.526756 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-09-20 09:45:19.526763 | orchestrator | Saturday 20 September 2025 09:41:11 +0000 (0:00:01.196) 0:02:20.797 ****
2025-09-20 09:45:19.526770 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.526776 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.526783 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.526789 | orchestrator |
2025-09-20 09:45:19.526796 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-09-20 09:45:19.526802 | orchestrator | Saturday 20 September 2025 09:41:13 +0000 (0:00:01.950) 0:02:22.747 ****
2025-09-20 09:45:19.526809 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.526816 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.526822 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.526829 | orchestrator |
2025-09-20 09:45:19.526835 | orchestrator | TASK [include_role : grafana] **************************************************
2025-09-20 09:45:19.526842 | orchestrator | Saturday 20 September 2025 09:41:14 +0000 (0:00:00.534) 0:02:23.282 ****
2025-09-20 09:45:19.526848 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.526855 | orchestrator |
2025-09-20 09:45:19.526862 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-09-20 09:45:19.526868 | orchestrator | Saturday 20 September 2025 09:41:15 +0000 (0:00:00.834) 0:02:24.116 ****
2025-09-20 09:45:19.526875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-20 09:45:19.526883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 09:45:19.526890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 09:45:19.526901 | orchestrator | 2025-09-20 09:45:19.526907 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-20 09:45:19.526914 | orchestrator | Saturday 20 September 2025 09:41:18 +0000 (0:00:03.164) 0:02:27.281 **** 2025-09-20 09:45:19.526934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 09:45:19.526942 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.526952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 09:45:19.526959 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.526966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 09:45:19.526973 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.526979 | orchestrator | 2025-09-20 09:45:19.526986 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-20 09:45:19.526993 | orchestrator | Saturday 20 September 2025 09:41:19 +0000 (0:00:00.675) 0:02:27.956 **** 2025-09-20 09:45:19.526999 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-20 09:45:19.527006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-20 09:45:19.527013 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.527019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-20 09:45:19.527026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-20 09:45:19.527038 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.527089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-20 09:45:19.527098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-20 09:45:19.527104 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.527111 | orchestrator | 2025-09-20 09:45:19.527118 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-20 09:45:19.527124 | orchestrator | Saturday 20 September 2025 09:41:19 +0000 (0:00:00.709) 0:02:28.665 **** 2025-09-20 09:45:19.527144 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.527151 | orchestrator | changed: 
[testbed-node-1] 2025-09-20 09:45:19.527158 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.527165 | orchestrator | 2025-09-20 09:45:19.527171 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-20 09:45:19.527178 | orchestrator | Saturday 20 September 2025 09:41:21 +0000 (0:00:01.349) 0:02:30.015 **** 2025-09-20 09:45:19.527184 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.527191 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.527197 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.527204 | orchestrator | 2025-09-20 09:45:19.527211 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-20 09:45:19.527217 | orchestrator | Saturday 20 September 2025 09:41:23 +0000 (0:00:02.051) 0:02:32.067 **** 2025-09-20 09:45:19.527224 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.527230 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.527241 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.527248 | orchestrator | 2025-09-20 09:45:19.527254 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-20 09:45:19.527261 | orchestrator | Saturday 20 September 2025 09:41:23 +0000 (0:00:00.541) 0:02:32.609 **** 2025-09-20 09:45:19.527268 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.527274 | orchestrator | 2025-09-20 09:45:19.527281 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-20 09:45:19.527287 | orchestrator | Saturday 20 September 2025 09:41:24 +0000 (0:00:00.895) 0:02:33.504 **** 2025-09-20 09:45:19.527299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:45:19.527325 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:45:19.527338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:45:19.527350 | orchestrator | 2025-09-20 09:45:19.527357 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-20 09:45:19.527364 | orchestrator | Saturday 20 September 2025 09:41:28 +0000 (0:00:03.915) 0:02:37.420 **** 2025-09-20 09:45:19.527393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 09:45:19.527402 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.527409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 09:45:19.527421 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.527446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 09:45:19.527455 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.527462 | orchestrator | 2025-09-20 09:45:19.527468 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-20 09:45:19.527479 | orchestrator | Saturday 20 September 2025 09:41:29 +0000 (0:00:01.237) 0:02:38.658 **** 2025-09-20 09:45:19.527487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 09:45:19.527494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 09:45:19.527501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 09:45:19.527508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 09:45:19.527515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-20 09:45:19.527522 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.527529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 09:45:19.527536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 09:45:19.527543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 09:45:19.527561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 09:45:19.527569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-20 09:45:19.527576 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.527586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 09:45:19.527593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 09:45:19.527605 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-20 09:45:19.527611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-20 09:45:19.527618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-20 09:45:19.527625 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.527631 | orchestrator | 2025-09-20 09:45:19.527638 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-20 09:45:19.527645 | orchestrator | Saturday 20 September 2025 09:41:30 +0000 (0:00:01.033) 0:02:39.691 **** 2025-09-20 09:45:19.527651 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.527658 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.527665 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.527671 | orchestrator | 2025-09-20 09:45:19.527678 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-20 09:45:19.527684 | orchestrator | Saturday 20 September 2025 09:41:32 +0000 (0:00:01.337) 0:02:41.029 **** 2025-09-20 09:45:19.527691 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.527698 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.527704 | orchestrator | changed: [testbed-node-2] 2025-09-20 
09:45:19.527711 | orchestrator | 2025-09-20 09:45:19.527717 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-20 09:45:19.527724 | orchestrator | Saturday 20 September 2025 09:41:34 +0000 (0:00:02.057) 0:02:43.086 **** 2025-09-20 09:45:19.527730 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.527737 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.527743 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.527750 | orchestrator | 2025-09-20 09:45:19.527756 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-20 09:45:19.527763 | orchestrator | Saturday 20 September 2025 09:41:34 +0000 (0:00:00.452) 0:02:43.539 **** 2025-09-20 09:45:19.527770 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.527840 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.527847 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.527854 | orchestrator | 2025-09-20 09:45:19.527860 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-20 09:45:19.527867 | orchestrator | Saturday 20 September 2025 09:41:35 +0000 (0:00:00.646) 0:02:44.185 **** 2025-09-20 09:45:19.527874 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.527880 | orchestrator | 2025-09-20 09:45:19.527887 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-20 09:45:19.527894 | orchestrator | Saturday 20 September 2025 09:41:36 +0000 (0:00:00.995) 0:02:45.181 **** 2025-09-20 09:45:19.527905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:45:19.527921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 09:45:19.527929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:45:19.527936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:45:19.527944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:45:19.527955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:45:19.527971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:45:19.527978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:45:19.527985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:45:19.527992 | orchestrator |
2025-09-20 09:45:19.527999 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-09-20 09:45:19.528006 | orchestrator | Saturday 20 September 2025 09:41:40 +0000 (0:00:03.971) 0:02:49.153 ****
2025-09-20 09:45:19.528013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:45:19.528032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:45:19.528049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:45:19.528057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:45:19.528064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:45:19.528071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:45:19.528078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:45:19.528089 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.528107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:45:19.528114 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.528125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:45:19.528174 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.528182 | orchestrator |
2025-09-20 09:45:19.528189 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-09-20 09:45:19.528195 | orchestrator | Saturday 20 September 2025 09:41:41 +0000 (0:00:01.066) 0:02:50.220 ****
2025-09-20 09:45:19.528203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-20 09:45:19.528211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-20 09:45:19.528218 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.528225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-20 09:45:19.528232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-20 09:45:19.528239 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.528246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-20 09:45:19.528253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-20 09:45:19.528260 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.528266 | orchestrator |
2025-09-20 09:45:19.528273 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-20 09:45:19.528280 | orchestrator | Saturday 20 September 2025 09:41:42 +0000 (0:00:01.060) 0:02:51.280 ****
2025-09-20 09:45:19.528302 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.528308 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.528315 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.528321 | orchestrator |
2025-09-20 09:45:19.528328 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-20 09:45:19.528335 | orchestrator | Saturday 20 September 2025 09:41:43 +0000 (0:00:01.269) 0:02:52.550 ****
2025-09-20 09:45:19.528341 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.528347 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.528353 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.528359 | orchestrator |
2025-09-20 09:45:19.528365 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-20 09:45:19.528371 | orchestrator | Saturday 20 September 2025 09:41:46 +0000 (0:00:02.386) 0:02:54.936 ****
2025-09-20 09:45:19.528377 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.528384 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.528390 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.528396 | orchestrator |
2025-09-20 09:45:19.528402 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-20 09:45:19.528408 | orchestrator | Saturday 20 September 2025 09:41:46 +0000 (0:00:00.444) 0:02:55.381 ****
2025-09-20 09:45:19.528414 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.528420 | orchestrator |
2025-09-20 09:45:19.528427 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-20 09:45:19.528433 | orchestrator | Saturday 20 September 2025 09:41:47 +0000 (0:00:00.954) 0:02:56.336 ****
2025-09-20 09:45:19.528502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-20 09:45:19.528512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-20 09:45:19.528533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-20 09:45:19.528556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528563 | orchestrator |
2025-09-20 09:45:19.528569 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-09-20 09:45:19.528576 | orchestrator | Saturday 20 September 2025 09:41:51 +0000 (0:00:03.810) 0:03:00.146 ****
2025-09-20 09:45:19.528582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-20 09:45:19.528589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528599 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.528606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-20 09:45:19.528616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528622 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.528629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-20 09:45:19.528636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528642 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.528652 | orchestrator |
2025-09-20 09:45:19.528659 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-09-20 09:45:19.528665 | orchestrator | Saturday 20 September 2025 09:41:52 +0000 (0:00:00.827) 0:03:00.973 ****
2025-09-20 09:45:19.528671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-20 09:45:19.528678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-20 09:45:19.528684 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.528690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-20 09:45:19.528697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-20 09:45:19.528703 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.528709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-20 09:45:19.528715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-20 09:45:19.528721 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.528728 | orchestrator |
2025-09-20 09:45:19.528734 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-09-20 09:45:19.528740 | orchestrator | Saturday 20 September 2025 09:41:52 +0000 (0:00:00.859) 0:03:01.833 ****
2025-09-20 09:45:19.528746 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.528752 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.528758 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.528764 | orchestrator |
2025-09-20 09:45:19.528770 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-09-20 09:45:19.528788 | orchestrator | Saturday 20 September 2025 09:41:54 +0000 (0:00:01.202) 0:03:03.035 ****
2025-09-20 09:45:19.528795 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:45:19.528801 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:45:19.528807 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:45:19.528813 | orchestrator |
2025-09-20 09:45:19.528820 | orchestrator | TASK [include_role : manila] ***************************************************
2025-09-20 09:45:19.528826 | orchestrator | Saturday 20 September 2025 09:41:56 +0000 (0:00:01.963) 0:03:04.998 ****
2025-09-20 09:45:19.528844 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.528851 | orchestrator |
2025-09-20 09:45:19.528857 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-09-20 09:45:19.528863 | orchestrator | Saturday 20 September 2025 09:41:57 +0000 (0:00:01.259) 0:03:06.258 ****
2025-09-20 09:45:19.528873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-20 09:45:19.528885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-20 09:45:19.528925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-20 09:45:19.528948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.528997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.529004 | orchestrator |
2025-09-20 09:45:19.529010 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-09-20 09:45:19.529016 | orchestrator | Saturday 20 September 2025 09:42:00 +0000 (0:00:03.128) 0:03:09.386 ****
2025-09-20 09:45:19.529026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-20 09:45:19.529036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 09:45:19.529042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.529049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.529055 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.529062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-20 09:45:19.529079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-20 09:45:19.529094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.529101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.529107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.529114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.529120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.529150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.529161 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.529168 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.529174 | orchestrator | 2025-09-20 09:45:19.529180 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-20 09:45:19.529186 | orchestrator | Saturday 20 September 2025 09:42:01 +0000 (0:00:00.604) 0:03:09.991 **** 2025-09-20 09:45:19.529193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-20 09:45:19.529202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-20 09:45:19.529208 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.529214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-20 09:45:19.529221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-20 
09:45:19.529227 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.529233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-20 09:45:19.529239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-20 09:45:19.529245 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.529260 | orchestrator | 2025-09-20 09:45:19.529267 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-20 09:45:19.529273 | orchestrator | Saturday 20 September 2025 09:42:02 +0000 (0:00:01.138) 0:03:11.129 **** 2025-09-20 09:45:19.529279 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.529285 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.529291 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.529305 | orchestrator | 2025-09-20 09:45:19.529311 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-20 09:45:19.529317 | orchestrator | Saturday 20 September 2025 09:42:03 +0000 (0:00:01.265) 0:03:12.395 **** 2025-09-20 09:45:19.529324 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.529330 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.529336 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.529342 | orchestrator | 2025-09-20 09:45:19.529348 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-20 09:45:19.529354 | orchestrator | Saturday 20 September 2025 09:42:05 +0000 (0:00:02.135) 0:03:14.531 **** 2025-09-20 09:45:19.529360 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 
09:45:19.529366 | orchestrator | 2025-09-20 09:45:19.529372 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-20 09:45:19.529379 | orchestrator | Saturday 20 September 2025 09:42:07 +0000 (0:00:01.453) 0:03:15.984 **** 2025-09-20 09:45:19.529385 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-20 09:45:19.529391 | orchestrator | 2025-09-20 09:45:19.529397 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-20 09:45:19.529403 | orchestrator | Saturday 20 September 2025 09:42:09 +0000 (0:00:02.770) 0:03:18.754 **** 2025-09-20 09:45:19.529423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': 
False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:45:19.529441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 09:45:19.529447 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.529454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:45:19.529465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 09:45:19.529472 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.529499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:45:19.529507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 09:45:19.529514 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.529530 | orchestrator | 2025-09-20 09:45:19.529536 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-20 09:45:19.529542 | orchestrator | Saturday 20 September 2025 09:42:12 +0000 (0:00:02.278) 0:03:21.033 **** 2025-09-20 09:45:19.529549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:45:19.529565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 09:45:19.529573 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.529583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:45:19.529590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 09:45:19.529601 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.529615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:45:19.529622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-20 09:45:19.529629 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.529635 | orchestrator | 2025-09-20 09:45:19.529641 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-20 09:45:19.529647 | orchestrator | Saturday 20 September 2025 09:42:14 +0000 (0:00:02.573) 0:03:23.607 **** 2025-09-20 09:45:19.529654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 09:45:19.529660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 09:45:19.529671 | 
orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.529678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 09:45:19.529684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 09:45:19.529690 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.529700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 
09:45:19.529711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-20 09:45:19.529717 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.529723 | orchestrator | 2025-09-20 09:45:19.529730 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-20 09:45:19.529736 | orchestrator | Saturday 20 September 2025 09:42:17 +0000 (0:00:02.976) 0:03:26.583 **** 2025-09-20 09:45:19.529742 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.529748 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.529754 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.529760 | orchestrator | 2025-09-20 09:45:19.529766 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-20 09:45:19.529772 | orchestrator | Saturday 20 September 2025 09:42:19 +0000 (0:00:01.837) 0:03:28.420 **** 2025-09-20 09:45:19.529779 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.529785 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.529791 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.529797 | orchestrator | 2025-09-20 09:45:19.529803 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-20 09:45:19.529813 | orchestrator | Saturday 20 September 2025 09:42:21 +0000 (0:00:01.529) 0:03:29.949 **** 2025-09-20 09:45:19.529820 
| orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.529826 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.529832 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.529838 | orchestrator | 2025-09-20 09:45:19.529844 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-20 09:45:19.529850 | orchestrator | Saturday 20 September 2025 09:42:21 +0000 (0:00:00.328) 0:03:30.278 **** 2025-09-20 09:45:19.529857 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.529863 | orchestrator | 2025-09-20 09:45:19.529869 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-20 09:45:19.529875 | orchestrator | Saturday 20 September 2025 09:42:22 +0000 (0:00:01.382) 0:03:31.660 **** 2025-09-20 09:45:19.529882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-20 09:45:19.529889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-20 09:45:19.529907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-20 09:45:19.529915 | orchestrator | 2025-09-20 09:45:19.529921 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-20 09:45:19.529930 | orchestrator | Saturday 20 September 2025 09:42:24 +0000 (0:00:01.412) 0:03:33.073 **** 2025-09-20 09:45:19.529937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-20 09:45:19.529948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-20 09:45:19.529963 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.529969 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.529976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}})  2025-09-20 09:45:19.529982 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.529996 | orchestrator | 2025-09-20 09:45:19.530003 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-20 09:45:19.530009 | orchestrator | Saturday 20 September 2025 09:42:24 +0000 (0:00:00.396) 0:03:33.470 **** 2025-09-20 09:45:19.530035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-20 09:45:19.530043 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.530067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-20 09:45:19.530073 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.530092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-20 09:45:19.530099 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.530105 | orchestrator | 2025-09-20 09:45:19.530111 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-20 09:45:19.530118 | orchestrator | Saturday 20 September 2025 09:42:25 +0000 (0:00:00.923) 0:03:34.394 **** 2025-09-20 09:45:19.530124 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.530143 | orchestrator | skipping: [testbed-node-1] 
2025-09-20 09:45:19.530149 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.530155 | orchestrator | 2025-09-20 09:45:19.530162 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-20 09:45:19.530168 | orchestrator | Saturday 20 September 2025 09:42:25 +0000 (0:00:00.447) 0:03:34.841 **** 2025-09-20 09:45:19.530174 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.530185 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.530191 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.530197 | orchestrator | 2025-09-20 09:45:19.530207 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-20 09:45:19.530213 | orchestrator | Saturday 20 September 2025 09:42:27 +0000 (0:00:01.246) 0:03:36.088 **** 2025-09-20 09:45:19.530219 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.530225 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.530231 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.530237 | orchestrator | 2025-09-20 09:45:19.530243 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-20 09:45:19.530249 | orchestrator | Saturday 20 September 2025 09:42:27 +0000 (0:00:00.281) 0:03:36.370 **** 2025-09-20 09:45:19.530255 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.530261 | orchestrator | 2025-09-20 09:45:19.530267 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-20 09:45:19.530273 | orchestrator | Saturday 20 September 2025 09:42:28 +0000 (0:00:01.306) 0:03:37.676 **** 2025-09-20 09:45:19.530280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:45:19.530287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 09:45:19.530349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.530392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 
09:45:19.530399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 09:45:19.530470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.530477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.530509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.530543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:45:19.530565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.530626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  
2025-09-20 09:45:19.530639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 09:45:19.530646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530662 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.530697 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.530749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.530765 | orchestrator |
2025-09-20 09:45:19.530771 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-09-20 09:45:19.530777 | orchestrator | Saturday 20 September 2025 09:42:32 +0000 (0:00:04.209) 0:03:41.885 ****
2025-09-20 09:45:19.530784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:45:19.530791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 09:45:19.530838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.530886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.530919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:45:19.530929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530957 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:45:19.530965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.530971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.530988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.531015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 09:45:19.531025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531031 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.531038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-20 09:45:19.531055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 
09:45:19.531078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.531091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.531102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.531108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.531151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.531157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531230 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 09:45:19.531236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-20 09:45:19.531266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.531277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-20 09:45:19.531284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.531309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-20 09:45:19.531327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.531342 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.531363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:45:19.531369 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.531376 | orchestrator | 2025-09-20 09:45:19.531386 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-20 09:45:19.531393 | orchestrator | Saturday 20 September 2025 09:42:34 +0000 (0:00:01.640) 0:03:43.526 **** 2025-09-20 09:45:19.531399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-20 09:45:19.531405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-20 09:45:19.531412 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.531418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-20 09:45:19.531424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-20 09:45:19.531431 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.531437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-20 09:45:19.531443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-20 09:45:19.531449 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.531455 | orchestrator | 2025-09-20 09:45:19.531461 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-20 09:45:19.531467 | orchestrator | Saturday 20 September 2025 09:42:36 +0000 (0:00:02.217) 0:03:45.743 **** 2025-09-20 09:45:19.531473 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.531480 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.531486 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.531492 | orchestrator | 2025-09-20 09:45:19.531498 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-20 09:45:19.531504 | orchestrator | Saturday 20 September 2025 09:42:38 +0000 (0:00:01.244) 0:03:46.987 **** 2025-09-20 09:45:19.531510 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.531516 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.531522 | orchestrator | changed: [testbed-node-2] 2025-09-20 
09:45:19.531528 | orchestrator | 2025-09-20 09:45:19.531534 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-20 09:45:19.531540 | orchestrator | Saturday 20 September 2025 09:42:39 +0000 (0:00:01.891) 0:03:48.879 **** 2025-09-20 09:45:19.531546 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.531552 | orchestrator | 2025-09-20 09:45:19.531558 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-20 09:45:19.531565 | orchestrator | Saturday 20 September 2025 09:42:41 +0000 (0:00:01.149) 0:03:50.029 **** 2025-09-20 09:45:19.531592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.531608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.531615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.531622 | orchestrator | 2025-09-20 09:45:19.531628 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-20 09:45:19.531634 | orchestrator | Saturday 20 September 2025 09:42:44 +0000 (0:00:03.195) 0:03:53.225 **** 2025-09-20 09:45:19.531640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.531647 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.531668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.531679 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.531688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.531695 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.531701 | orchestrator | 2025-09-20 09:45:19.531707 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-20 09:45:19.531714 | orchestrator | Saturday 20 September 2025 09:42:44 +0000 (0:00:00.553) 0:03:53.778 **** 2025-09-20 09:45:19.531720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 09:45:19.531726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 09:45:19.531733 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.531740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 09:45:19.531746 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 09:45:19.531752 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.531758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 09:45:19.531765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-20 09:45:19.531771 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.531777 | orchestrator | 2025-09-20 09:45:19.531783 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-20 09:45:19.531789 | orchestrator | Saturday 20 September 2025 09:42:45 +0000 (0:00:00.797) 0:03:54.575 **** 2025-09-20 09:45:19.531795 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.531801 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.531808 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.531814 | orchestrator | 2025-09-20 09:45:19.531820 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-20 09:45:19.531826 | orchestrator | Saturday 20 September 2025 09:42:46 +0000 (0:00:01.281) 0:03:55.857 **** 2025-09-20 09:45:19.531832 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.531838 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.531844 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.531850 | orchestrator | 2025-09-20 09:45:19.531856 | orchestrator | TASK [include_role : nova] 
***************************************************** 2025-09-20 09:45:19.531866 | orchestrator | Saturday 20 September 2025 09:42:49 +0000 (0:00:02.154) 0:03:58.011 **** 2025-09-20 09:45:19.531873 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.531879 | orchestrator | 2025-09-20 09:45:19.531885 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-20 09:45:19.531891 | orchestrator | Saturday 20 September 2025 09:42:50 +0000 (0:00:01.543) 0:03:59.555 **** 2025-09-20 09:45:19.531913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.531922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.531936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.531982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.531995 | orchestrator | 2025-09-20 09:45:19.532001 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-20 09:45:19.532011 | orchestrator | Saturday 20 September 2025 09:42:54 +0000 (0:00:04.205) 0:04:03.760 **** 2025-09-20 09:45:19.532030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.532038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.532049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.532056 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.532062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.532069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.532079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.532086 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.532107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.532115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.532121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.532127 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.532207 | orchestrator | 2025-09-20 09:45:19.532213 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-20 09:45:19.532220 | orchestrator | Saturday 20 September 2025 09:42:55 +0000 (0:00:01.029) 0:04:04.790 **** 2025-09-20 09:45:19.532226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532257 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.532263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532300 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.532307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-20 09:45:19.532356 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.532362 | orchestrator | 2025-09-20 09:45:19.532367 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-20 
09:45:19.532373 | orchestrator | Saturday 20 September 2025 09:42:57 +0000 (0:00:01.322) 0:04:06.113 **** 2025-09-20 09:45:19.532378 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.532383 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.532389 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.532394 | orchestrator | 2025-09-20 09:45:19.532400 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-20 09:45:19.532405 | orchestrator | Saturday 20 September 2025 09:42:58 +0000 (0:00:01.291) 0:04:07.405 **** 2025-09-20 09:45:19.532410 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.532416 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.532421 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.532426 | orchestrator | 2025-09-20 09:45:19.532432 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-20 09:45:19.532441 | orchestrator | Saturday 20 September 2025 09:43:00 +0000 (0:00:02.083) 0:04:09.488 **** 2025-09-20 09:45:19.532447 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.532452 | orchestrator | 2025-09-20 09:45:19.532457 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-20 09:45:19.532463 | orchestrator | Saturday 20 September 2025 09:43:02 +0000 (0:00:01.576) 0:04:11.064 **** 2025-09-20 09:45:19.532468 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-20 09:45:19.532474 | orchestrator | 2025-09-20 09:45:19.532479 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-20 09:45:19.532484 | orchestrator | Saturday 20 September 2025 09:43:03 +0000 (0:00:00.889) 0:04:11.954 **** 2025-09-20 
09:45:19.532490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-20 09:45:19.532496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-20 09:45:19.532502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-20 09:45:19.532507 | orchestrator | 2025-09-20 09:45:19.532513 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-20 09:45:19.532519 | orchestrator | Saturday 20 September 2025 09:43:07 +0000 (0:00:04.468) 0:04:16.422 **** 2025-09-20 09:45:19.532535 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 09:45:19.532541 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.532550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 09:45:19.532556 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.532561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 09:45:19.532570 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.532576 | orchestrator | 2025-09-20 09:45:19.532581 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-20 09:45:19.532586 | orchestrator | Saturday 20 September 
2025 09:43:09 +0000 (0:00:01.530) 0:04:17.952 **** 2025-09-20 09:45:19.532592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 09:45:19.532598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 09:45:19.532604 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.532609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 09:45:19.532615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 09:45:19.532620 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.532626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 09:45:19.532632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-20 09:45:19.532637 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.532643 | orchestrator | 
2025-09-20 09:45:19.532648 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-20 09:45:19.532653 | orchestrator | Saturday 20 September 2025 09:43:10 +0000 (0:00:01.617) 0:04:19.569 **** 2025-09-20 09:45:19.532659 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.532664 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.532669 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.532675 | orchestrator | 2025-09-20 09:45:19.532680 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-20 09:45:19.532685 | orchestrator | Saturday 20 September 2025 09:43:13 +0000 (0:00:02.396) 0:04:21.966 **** 2025-09-20 09:45:19.532691 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.532696 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.532701 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.532706 | orchestrator | 2025-09-20 09:45:19.532712 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-20 09:45:19.532717 | orchestrator | Saturday 20 September 2025 09:43:16 +0000 (0:00:03.111) 0:04:25.078 **** 2025-09-20 09:45:19.532733 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-20 09:45:19.532739 | orchestrator | 2025-09-20 09:45:19.532744 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-20 09:45:19.532753 | orchestrator | Saturday 20 September 2025 09:43:17 +0000 (0:00:01.477) 0:04:26.555 **** 2025-09-20 09:45:19.532761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 09:45:19.532767 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.532773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 09:45:19.532778 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.532784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 09:45:19.532790 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.532795 | orchestrator | 2025-09-20 09:45:19.532800 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-20 09:45:19.532806 | orchestrator | Saturday 20 September 2025 09:43:18 +0000 (0:00:01.307) 0:04:27.863 **** 2025-09-20 09:45:19.532811 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 09:45:19.532817 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.532822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 09:45:19.532828 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.532833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-20 09:45:19.532842 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.532848 | orchestrator | 2025-09-20 09:45:19.532853 | orchestrator | TASK [haproxy-config : Configuring firewall for 
nova-cell:nova-spicehtml5proxy] *** 2025-09-20 09:45:19.532858 | orchestrator | Saturday 20 September 2025 09:43:20 +0000 (0:00:01.337) 0:04:29.201 **** 2025-09-20 09:45:19.532864 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.532869 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.532874 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.532879 | orchestrator | 2025-09-20 09:45:19.532895 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-20 09:45:19.532901 | orchestrator | Saturday 20 September 2025 09:43:22 +0000 (0:00:01.887) 0:04:31.088 **** 2025-09-20 09:45:19.532906 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.532912 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.532917 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.532923 | orchestrator | 2025-09-20 09:45:19.532928 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-20 09:45:19.532933 | orchestrator | Saturday 20 September 2025 09:43:24 +0000 (0:00:02.373) 0:04:33.461 **** 2025-09-20 09:45:19.532939 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.532944 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.532949 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.532955 | orchestrator | 2025-09-20 09:45:19.532960 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-20 09:45:19.532969 | orchestrator | Saturday 20 September 2025 09:43:27 +0000 (0:00:03.030) 0:04:36.492 **** 2025-09-20 09:45:19.532974 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-20 09:45:19.532980 | orchestrator | 2025-09-20 09:45:19.532985 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-20 
09:45:19.532990 | orchestrator | Saturday 20 September 2025 09:43:28 +0000 (0:00:00.877) 0:04:37.369 **** 2025-09-20 09:45:19.532996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 09:45:19.533002 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.533007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 09:45:19.533013 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.533018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 09:45:19.533024 | orchestrator | 
skipping: [testbed-node-2] 2025-09-20 09:45:19.533029 | orchestrator | 2025-09-20 09:45:19.533035 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-20 09:45:19.533046 | orchestrator | Saturday 20 September 2025 09:43:29 +0000 (0:00:01.336) 0:04:38.706 **** 2025-09-20 09:45:19.533051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 09:45:19.533057 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.533062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 09:45:19.533068 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.533083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-20 09:45:19.533090 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.533095 | orchestrator | 2025-09-20 09:45:19.533101 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-20 09:45:19.533109 | orchestrator | Saturday 20 September 2025 09:43:31 +0000 (0:00:01.392) 0:04:40.098 **** 2025-09-20 09:45:19.533114 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.533120 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.533125 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.533141 | orchestrator | 2025-09-20 09:45:19.533147 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-20 09:45:19.533152 | orchestrator | Saturday 20 September 2025 09:43:32 +0000 (0:00:01.591) 0:04:41.689 **** 2025-09-20 09:45:19.533158 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.533163 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.533169 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.533174 | orchestrator | 2025-09-20 09:45:19.533179 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-20 09:45:19.533185 | orchestrator | Saturday 20 September 2025 09:43:35 +0000 (0:00:02.358) 0:04:44.048 **** 2025-09-20 09:45:19.533190 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.533195 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.533201 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.533206 | orchestrator | 2025-09-20 09:45:19.533211 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-20 09:45:19.533217 | orchestrator | Saturday 20 September 2025 
09:43:38 +0000 (0:00:03.299) 0:04:47.347 **** 2025-09-20 09:45:19.533222 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.533227 | orchestrator | 2025-09-20 09:45:19.533233 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-20 09:45:19.533238 | orchestrator | Saturday 20 September 2025 09:43:40 +0000 (0:00:01.684) 0:04:49.031 **** 2025-09-20 09:45:19.533244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.533255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 09:45:19.533261 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.533301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.533310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 09:45:19.533316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.533346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 09:45:19.533352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 09:45:19.533362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.533379 | orchestrator | 2025-09-20 09:45:19.533384 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-20 09:45:19.533390 | orchestrator | Saturday 20 September 2025 09:43:43 +0000 (0:00:03.577) 0:04:52.608 **** 2025-09-20 09:45:19.533406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2025-09-20 09:45:19.533415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 09:45:19.533421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 
'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.533441 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.533447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.533463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 09:45:19.533469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 
09:45:19.533493 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.533499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.533505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 09:45:19.533510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 09:45:19.533536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:45:19.533546 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.533551 | orchestrator | 2025-09-20 09:45:19.533557 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-20 09:45:19.533562 | orchestrator | Saturday 20 September 2025 09:43:44 +0000 (0:00:00.741) 0:04:53.349 **** 2025-09-20 09:45:19.533568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 09:45:19.533573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 09:45:19.533579 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.533584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 09:45:19.533590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 09:45:19.533595 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.533600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 09:45:19.533606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-20 09:45:19.533611 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.533617 | orchestrator | 2025-09-20 09:45:19.533622 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-20 09:45:19.533627 | orchestrator | Saturday 20 September 2025 09:43:45 +0000 (0:00:01.553) 0:04:54.903 **** 2025-09-20 09:45:19.533632 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.533638 | orchestrator | changed: 
[testbed-node-1] 2025-09-20 09:45:19.533643 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.533648 | orchestrator | 2025-09-20 09:45:19.533654 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-20 09:45:19.533659 | orchestrator | Saturday 20 September 2025 09:43:47 +0000 (0:00:01.585) 0:04:56.488 **** 2025-09-20 09:45:19.533664 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.533670 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.533675 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.533680 | orchestrator | 2025-09-20 09:45:19.533686 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-20 09:45:19.533691 | orchestrator | Saturday 20 September 2025 09:43:49 +0000 (0:00:02.205) 0:04:58.694 **** 2025-09-20 09:45:19.533697 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.533702 | orchestrator | 2025-09-20 09:45:19.533707 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-20 09:45:19.533713 | orchestrator | Saturday 20 September 2025 09:43:51 +0000 (0:00:01.407) 0:05:00.101 **** 2025-09-20 09:45:19.533729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 09:45:19.533743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 09:45:19.533749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 09:45:19.533755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 09:45:19.533772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 09:45:19.533786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 09:45:19.533792 | orchestrator | 2025-09-20 09:45:19.533797 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-20 09:45:19.533803 | orchestrator | Saturday 20 September 2025 09:43:56 +0000 (0:00:05.183) 0:05:05.285 **** 2025-09-20 09:45:19.533808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 09:45:19.533814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 09:45:19.533820 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.533826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 09:45:19.533850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 09:45:19.533856 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.533862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-20 09:45:19.533868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-20 09:45:19.533874 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.533879 | orchestrator | 2025-09-20 09:45:19.533885 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-20 09:45:19.533890 | orchestrator | Saturday 20 September 2025 09:43:57 +0000 (0:00:00.660) 0:05:05.945 **** 2025-09-20 09:45:19.533896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-20 09:45:19.533901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 09:45:19.533912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 09:45:19.533917 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.533923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-20 09:45:19.533938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 09:45:19.533945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 09:45:19.533950 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.533959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-20 09:45:19.533964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 09:45:19.533970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-20 09:45:19.533976 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.533981 | orchestrator | 2025-09-20 09:45:19.533986 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-20 09:45:19.533992 | orchestrator | Saturday 20 September 2025 09:43:57 +0000 (0:00:00.930) 0:05:06.876 **** 2025-09-20 09:45:19.533997 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.534002 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.534008 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.534034 | orchestrator | 2025-09-20 09:45:19.534042 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-20 09:45:19.534047 | orchestrator | Saturday 20 September 2025 09:43:58 +0000 (0:00:00.909) 0:05:07.786 **** 2025-09-20 09:45:19.534052 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.534058 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.534063 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.534068 | orchestrator | 2025-09-20 09:45:19.534073 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-20 09:45:19.534079 | orchestrator | Saturday 20 September 2025 09:44:00 +0000 (0:00:01.341) 0:05:09.127 **** 2025-09-20 09:45:19.534084 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:45:19.534089 | orchestrator | 2025-09-20 09:45:19.534095 | orchestrator | TASK [haproxy-config : Copying over 
prometheus haproxy config] ***************** 2025-09-20 09:45:19.534100 | orchestrator | Saturday 20 September 2025 09:44:01 +0000 (0:00:01.437) 0:05:10.565 **** 2025-09-20 09:45:19.534106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 09:45:19.534117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 09:45:19.534123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 09:45:19.534191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 
2025-09-20 09:45:19.534197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 09:45:19.534203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 09:45:19.534229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 09:45:19.534237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 09:45:19.534243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 09:45:19.534264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 09:45:19.534274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-20 09:45:19.534283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 09:45:19.534300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 09:45:19.534310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-20 09:45:19.534319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 09:45:19.534339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 09:45:19.534349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-20 09:45:19.534355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:45:19.534369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 09:45:19.534375 | orchestrator | 2025-09-20 09:45:19.534380 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-20 
09:45:19.534386 | orchestrator | Saturday 20 September 2025 09:44:06 +0000 (0:00:04.474) 0:05:15.039 **** 2025-09-20 09:45:19.534394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-20 09:45:19.534400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 09:45:19.534410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 
09:45:19.534415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:45:19.534430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-20 09:45:19.534439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-20 09:45:19.534445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-20 09:45:19.534469 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.534474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-20 09:45:19.534480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:45:19.534489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:45:19.534511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-20 09:45:19.534547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-20 09:45:19.534558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-20 09:45:19.534582 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.534591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-20 09:45:19.534601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:45:19.534606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:45:19.534627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-20 09:45:19.534636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-20 09:45:19.534649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:45:19.534661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-20 09:45:19.534666 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.534672 | orchestrator |
2025-09-20 09:45:19.534677 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-09-20 09:45:19.534683 | orchestrator | Saturday 20 September 2025 09:44:07 +0000 (0:00:01.379) 0:05:16.419 ****
2025-09-20 09:45:19.534688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-20 09:45:19.534695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-20 09:45:19.534700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-20 09:45:19.534706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-20 09:45:19.534711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-20 09:45:19.534719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-20 09:45:19.534725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-20 09:45:19.534743 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.534751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-20 09:45:19.534756 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.534760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-20 09:45:19.534765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-20 09:45:19.534770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-20 09:45:19.534775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-20 09:45:19.534780 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.534785 | orchestrator |
2025-09-20 09:45:19.534790 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-09-20 09:45:19.534795 | orchestrator | Saturday 20 September 2025 09:44:08 +0000 (0:00:01.088) 0:05:17.507 ****
2025-09-20 09:45:19.534800 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.534805 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.534809 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.534814 | orchestrator |
2025-09-20 09:45:19.534819 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-09-20 09:45:19.534824 | orchestrator | Saturday 20 September 2025 09:44:09 +0000 (0:00:00.476) 0:05:17.983 ****
2025-09-20 09:45:19.534828 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.534833 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.534838 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.534842 | orchestrator |
2025-09-20 09:45:19.534847 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-09-20 09:45:19.534852 | orchestrator | Saturday 20 September 2025 09:44:10 +0000 (0:00:01.510) 0:05:19.494 ****
2025-09-20 09:45:19.534857 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.534861 | orchestrator |
2025-09-20 09:45:19.534866 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-09-20 09:45:19.534871 | orchestrator | Saturday 20 September 2025 09:44:12 +0000 (0:00:01.764) 0:05:21.258 ****
2025-09-20 09:45:19.534876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-20 09:45:19.534898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-20 09:45:19.534904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-20 09:45:19.534910 | orchestrator |
2025-09-20 09:45:19.534915 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-09-20 09:45:19.534920 | orchestrator | Saturday 20 September 2025 09:44:14 +0000 (0:00:02.478) 0:05:23.736 ****
2025-09-20 09:45:19.534925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-20 09:45:19.534930 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.534935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-20 09:45:19.534944 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.534954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-20 09:45:19.534960 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.534965 | orchestrator |
2025-09-20 09:45:19.534970 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-09-20 09:45:19.534975 | orchestrator | Saturday 20 September 2025 09:44:15 +0000 (0:00:00.441) 0:05:24.178 ****
2025-09-20 09:45:19.534980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-20 09:45:19.534985 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.534989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-20 09:45:19.534994 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.534999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-20 09:45:19.535004 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.535009 | orchestrator |
2025-09-20 09:45:19.535016 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-09-20 09:45:19.535024 | orchestrator | Saturday 20 September 2025 09:44:16 +0000 (0:00:01.035) 0:05:25.214 ****
2025-09-20 09:45:19.535031 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.535039 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.535047 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.535055 | orchestrator |
2025-09-20 09:45:19.535063 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-09-20 09:45:19.535071 | orchestrator | Saturday 20 September 2025 09:44:16 +0000 (0:00:00.442) 0:05:25.657 ****
2025-09-20 09:45:19.535077 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:45:19.535081 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:45:19.535086 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:45:19.535091 | orchestrator |
2025-09-20 09:45:19.535095 | orchestrator | TASK [include_role : skyline] **************************************************
2025-09-20 09:45:19.535100 | orchestrator | Saturday 20 September 2025 09:44:18 +0000 (0:00:01.397) 0:05:27.055 ****
2025-09-20 09:45:19.535105 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:45:19.535110 | orchestrator |
2025-09-20 09:45:19.535114 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-09-20 09:45:19.535123 | orchestrator | Saturday 20 September 2025 09:44:19 +0000 (0:00:01.836) 0:05:28.892 ****
2025-09-20 09:45:19.535143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.535153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.535161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.535166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.535172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.535180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-20 09:45:19.535185 | orchestrator |
2025-09-20 09:45:19.535193 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-09-20 09:45:19.535198 | orchestrator | Saturday 20 September 2025 09:44:26 +0000 (0:00:06.355) 0:05:35.247 ****
2025-09-20 09:45:19.535207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.535212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.535217 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.535231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.535236 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.535252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-20 09:45:19.535257 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535262 | orchestrator | 2025-09-20 09:45:19.535267 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-20 09:45:19.535272 | orchestrator | Saturday 20 September 2025 09:44:26 +0000 (0:00:00.639) 0:05:35.886 **** 2025-09-20 09:45:19.535280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535290 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535300 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535324 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-20 09:45:19.535352 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535356 | orchestrator | 2025-09-20 09:45:19.535364 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-20 09:45:19.535369 | orchestrator | Saturday 20 September 2025 09:44:28 +0000 (0:00:01.675) 0:05:37.562 **** 2025-09-20 09:45:19.535373 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.535378 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.535383 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.535388 | orchestrator | 2025-09-20 09:45:19.535393 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-20 09:45:19.535397 | orchestrator | Saturday 20 September 2025 09:44:30 +0000 (0:00:01.389) 0:05:38.951 **** 2025-09-20 09:45:19.535402 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.535407 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.535415 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.535420 | orchestrator | 2025-09-20 09:45:19.535425 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-20 09:45:19.535429 | orchestrator | Saturday 20 September 2025 09:44:32 
+0000 (0:00:02.297) 0:05:41.248 **** 2025-09-20 09:45:19.535434 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535439 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535444 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535448 | orchestrator | 2025-09-20 09:45:19.535453 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-20 09:45:19.535458 | orchestrator | Saturday 20 September 2025 09:44:32 +0000 (0:00:00.341) 0:05:41.590 **** 2025-09-20 09:45:19.535463 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535467 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535472 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535477 | orchestrator | 2025-09-20 09:45:19.535482 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-20 09:45:19.535487 | orchestrator | Saturday 20 September 2025 09:44:33 +0000 (0:00:00.319) 0:05:41.910 **** 2025-09-20 09:45:19.535491 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535496 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535501 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535506 | orchestrator | 2025-09-20 09:45:19.535510 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-20 09:45:19.535515 | orchestrator | Saturday 20 September 2025 09:44:33 +0000 (0:00:00.654) 0:05:42.564 **** 2025-09-20 09:45:19.535520 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535525 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535529 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535534 | orchestrator | 2025-09-20 09:45:19.535539 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-20 09:45:19.535544 | orchestrator | Saturday 20 September 2025 09:44:33 
+0000 (0:00:00.338) 0:05:42.902 **** 2025-09-20 09:45:19.535548 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535553 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535558 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535563 | orchestrator | 2025-09-20 09:45:19.535567 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-20 09:45:19.535572 | orchestrator | Saturday 20 September 2025 09:44:34 +0000 (0:00:00.309) 0:05:43.212 **** 2025-09-20 09:45:19.535577 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535582 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535586 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535591 | orchestrator | 2025-09-20 09:45:19.535596 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-20 09:45:19.535601 | orchestrator | Saturday 20 September 2025 09:44:35 +0000 (0:00:00.919) 0:05:44.132 **** 2025-09-20 09:45:19.535605 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.535610 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.535615 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.535620 | orchestrator | 2025-09-20 09:45:19.535624 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-20 09:45:19.535629 | orchestrator | Saturday 20 September 2025 09:44:35 +0000 (0:00:00.747) 0:05:44.879 **** 2025-09-20 09:45:19.535634 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.535639 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.535643 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.535648 | orchestrator | 2025-09-20 09:45:19.535653 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-20 09:45:19.535658 | orchestrator | Saturday 20 September 2025 09:44:36 +0000 (0:00:00.340) 0:05:45.220 **** 
2025-09-20 09:45:19.535662 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.535667 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.535672 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.535680 | orchestrator | 2025-09-20 09:45:19.535685 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-20 09:45:19.535689 | orchestrator | Saturday 20 September 2025 09:44:37 +0000 (0:00:00.939) 0:05:46.160 **** 2025-09-20 09:45:19.535694 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.535699 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.535704 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.535708 | orchestrator | 2025-09-20 09:45:19.535713 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-20 09:45:19.535718 | orchestrator | Saturday 20 September 2025 09:44:38 +0000 (0:00:01.263) 0:05:47.423 **** 2025-09-20 09:45:19.535723 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.535727 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.535735 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.535740 | orchestrator | 2025-09-20 09:45:19.535745 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-20 09:45:19.535750 | orchestrator | Saturday 20 September 2025 09:44:39 +0000 (0:00:00.944) 0:05:48.368 **** 2025-09-20 09:45:19.535754 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.535759 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.535764 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.535769 | orchestrator | 2025-09-20 09:45:19.535774 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-20 09:45:19.535778 | orchestrator | Saturday 20 September 2025 09:44:47 +0000 (0:00:08.459) 0:05:56.828 **** 2025-09-20 09:45:19.535783 | orchestrator | ok: 
[testbed-node-0] 2025-09-20 09:45:19.535788 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.535793 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.535797 | orchestrator | 2025-09-20 09:45:19.535802 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-20 09:45:19.535809 | orchestrator | Saturday 20 September 2025 09:44:48 +0000 (0:00:00.742) 0:05:57.570 **** 2025-09-20 09:45:19.535814 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.535819 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.535824 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.535829 | orchestrator | 2025-09-20 09:45:19.535833 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-20 09:45:19.535838 | orchestrator | Saturday 20 September 2025 09:44:57 +0000 (0:00:08.643) 0:06:06.214 **** 2025-09-20 09:45:19.535843 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.535848 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.535852 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.535857 | orchestrator | 2025-09-20 09:45:19.535862 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-20 09:45:19.535867 | orchestrator | Saturday 20 September 2025 09:45:02 +0000 (0:00:05.060) 0:06:11.274 **** 2025-09-20 09:45:19.535872 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:45:19.535876 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:45:19.535881 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:45:19.535886 | orchestrator | 2025-09-20 09:45:19.535891 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-20 09:45:19.535895 | orchestrator | Saturday 20 September 2025 09:45:12 +0000 (0:00:09.746) 0:06:21.020 **** 2025-09-20 09:45:19.535900 | orchestrator | skipping: [testbed-node-0] 2025-09-20 
09:45:19.535905 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535910 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535914 | orchestrator | 2025-09-20 09:45:19.535919 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-20 09:45:19.535924 | orchestrator | Saturday 20 September 2025 09:45:12 +0000 (0:00:00.430) 0:06:21.451 **** 2025-09-20 09:45:19.535929 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535934 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535938 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535943 | orchestrator | 2025-09-20 09:45:19.535948 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-20 09:45:19.535956 | orchestrator | Saturday 20 September 2025 09:45:12 +0000 (0:00:00.383) 0:06:21.835 **** 2025-09-20 09:45:19.535960 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535965 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535970 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.535974 | orchestrator | 2025-09-20 09:45:19.535979 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-20 09:45:19.535984 | orchestrator | Saturday 20 September 2025 09:45:13 +0000 (0:00:00.760) 0:06:22.595 **** 2025-09-20 09:45:19.535989 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.535994 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.535998 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.536003 | orchestrator | 2025-09-20 09:45:19.536008 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-20 09:45:19.536013 | orchestrator | Saturday 20 September 2025 09:45:14 +0000 (0:00:00.366) 0:06:22.962 **** 2025-09-20 09:45:19.536017 | orchestrator | skipping: [testbed-node-0] 2025-09-20 
09:45:19.536022 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.536027 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.536032 | orchestrator | 2025-09-20 09:45:19.536036 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-20 09:45:19.536041 | orchestrator | Saturday 20 September 2025 09:45:14 +0000 (0:00:00.356) 0:06:23.319 **** 2025-09-20 09:45:19.536046 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:45:19.536051 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:45:19.536055 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:45:19.536060 | orchestrator | 2025-09-20 09:45:19.536065 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-20 09:45:19.536070 | orchestrator | Saturday 20 September 2025 09:45:14 +0000 (0:00:00.380) 0:06:23.700 **** 2025-09-20 09:45:19.536074 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.536079 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.536084 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.536089 | orchestrator | 2025-09-20 09:45:19.536094 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-20 09:45:19.536098 | orchestrator | Saturday 20 September 2025 09:45:16 +0000 (0:00:01.511) 0:06:25.212 **** 2025-09-20 09:45:19.536103 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:45:19.536108 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:45:19.536113 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:45:19.536117 | orchestrator | 2025-09-20 09:45:19.536122 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:45:19.536127 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-20 09:45:19.536144 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 
failed=0 skipped=97  rescued=0 ignored=0 2025-09-20 09:45:19.536149 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-20 09:45:19.536153 | orchestrator | 2025-09-20 09:45:19.536158 | orchestrator | 2025-09-20 09:45:19.536166 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:45:19.536171 | orchestrator | Saturday 20 September 2025 09:45:17 +0000 (0:00:00.862) 0:06:26.074 **** 2025-09-20 09:45:19.536176 | orchestrator | =============================================================================== 2025-09-20 09:45:19.536180 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.75s 2025-09-20 09:45:19.536185 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.64s 2025-09-20 09:45:19.536190 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.46s 2025-09-20 09:45:19.536194 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.49s 2025-09-20 09:45:19.536203 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.36s 2025-09-20 09:45:19.536210 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.15s 2025-09-20 09:45:19.536215 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.18s 2025-09-20 09:45:19.536220 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 5.06s 2025-09-20 09:45:19.536224 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.47s 2025-09-20 09:45:19.536229 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.47s 2025-09-20 09:45:19.536234 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.28s 
2025-09-20 09:45:19.536238 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.21s 2025-09-20 09:45:19.536243 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.21s 2025-09-20 09:45:19.536248 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.97s 2025-09-20 09:45:19.536253 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.92s 2025-09-20 09:45:19.536257 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.91s 2025-09-20 09:45:19.536262 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.87s 2025-09-20 09:45:19.536267 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 3.85s 2025-09-20 09:45:19.536272 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.81s 2025-09-20 09:45:19.536276 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.69s 2025-09-20 09:45:19.536281 | orchestrator | 2025-09-20 09:45:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:22.572706 | orchestrator | 2025-09-20 09:45:22 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED 2025-09-20 09:45:22.574580 | orchestrator | 2025-09-20 09:45:22 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 is in state STARTED 2025-09-20 09:45:22.575356 | orchestrator | 2025-09-20 09:45:22 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:45:22.575430 | orchestrator | 2025-09-20 09:45:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:25.610264 | orchestrator | 2025-09-20 09:45:25 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED 2025-09-20 09:45:25.612443 | orchestrator | 2025-09-20 09:45:25 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 
is in state STARTED 2025-09-20 09:45:25.616031 | orchestrator | 2025-09-20 09:45:25 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:45:25.616098 | orchestrator | 2025-09-20 09:45:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:28.654869 | orchestrator | 2025-09-20 09:45:28 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED 2025-09-20 09:45:28.655471 | orchestrator | 2025-09-20 09:45:28 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 is in state STARTED 2025-09-20 09:45:28.656660 | orchestrator | 2025-09-20 09:45:28 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:45:28.656978 | orchestrator | 2025-09-20 09:45:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:31.692566 | orchestrator | 2025-09-20 09:45:31 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED 2025-09-20 09:45:31.695962 | orchestrator | 2025-09-20 09:45:31 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 is in state STARTED 2025-09-20 09:45:31.698118 | orchestrator | 2025-09-20 09:45:31 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:45:31.698657 | orchestrator | 2025-09-20 09:45:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:34.784833 | orchestrator | 2025-09-20 09:45:34 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED 2025-09-20 09:45:34.787333 | orchestrator | 2025-09-20 09:45:34 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 is in state STARTED 2025-09-20 09:45:34.787389 | orchestrator | 2025-09-20 09:45:34 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED 2025-09-20 09:45:34.787452 | orchestrator | 2025-09-20 09:45:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:45:37.821851 | orchestrator | 2025-09-20 09:45:37 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED 2025-09-20 09:45:37.822012 | 
orchestrator | 2025-09-20 09:45:37 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 is in state STARTED
2025-09-20 09:45:37.822665 | orchestrator | 2025-09-20 09:45:37 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state STARTED
2025-09-20 09:45:37.822689 | orchestrator | 2025-09-20 09:45:37 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:47:30.551894 | orchestrator | 2025-09-20 09:47:30 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED
2025-09-20 09:47:30.553609 | orchestrator | 2025-09-20 09:47:30 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state STARTED
2025-09-20 09:47:30.555728 | orchestrator | 2025-09-20 09:47:30 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 is in state STARTED
2025-09-20 09:47:30.562638 | orchestrator | 2025-09-20 09:47:30 | INFO  | Task 73b6c484-0da2-4565-b81c-53702356cc50 is in state SUCCESS
2025-09-20 09:47:30.563011 | orchestrator |
2025-09-20 09:47:30.565259 | orchestrator |
2025-09-20 09:47:30.565288 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-09-20 09:47:30.565298 | orchestrator |
2025-09-20 09:47:30.565307 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-20 09:47:30.565316 | orchestrator | Saturday 20 September 2025 09:36:24 +0000 (0:00:00.814) 0:00:00.814 ****
2025-09-20 09:47:30.565327 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:47:30.565337 | orchestrator |
2025-09-20 09:47:30.565346 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-20 09:47:30.565354 | orchestrator | Saturday 20 September 2025 09:36:26 +0000 (0:00:01.190) 0:00:02.005 ****
2025-09-20 09:47:30.565363 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.565373 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.565382 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.565390 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.565399 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.565407 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.565416 | orchestrator |
2025-09-20 09:47:30.565425 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-20 09:47:30.565433 | orchestrator | Saturday 20 September 2025 09:36:27 +0000 (0:00:01.591) 0:00:03.596 ****
2025-09-20 09:47:30.565442 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.565451 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.565459 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.565468 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.565476 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.565485 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.565493 | orchestrator |
2025-09-20 09:47:30.565502 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-20 09:47:30.565511 | orchestrator | Saturday 20 September 2025 09:36:28 +0000 (0:00:01.010) 0:00:04.606 ****
2025-09-20 09:47:30.565519 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.565549 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.565558 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.565566 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.565575 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.565583 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.565592 | orchestrator |
2025-09-20 09:47:30.565600 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-20 09:47:30.565609 | orchestrator | Saturday 20 September 2025 09:36:29 +0000 (0:00:01.073) 0:00:05.680 ****
2025-09-20 09:47:30.565617 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.565626 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.565634 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.565643 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.565651 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.565660 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.565669 | orchestrator |
2025-09-20 09:47:30.565677 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-20 09:47:30.565686 | orchestrator | Saturday 20 September 2025 09:36:30 +0000 (0:00:00.717) 0:00:06.397 ****
2025-09-20 09:47:30.565694 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.565703 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.565711 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.565720 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.565728 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.565736 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.565745 | orchestrator |
2025-09-20 09:47:30.565753 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-20 09:47:30.565762 | orchestrator | Saturday 20 September 2025 09:36:31 +0000 (0:00:00.628) 0:00:07.026 ****
2025-09-20 09:47:30.565771 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.565779 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.565787 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.565796 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.565804 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.565812 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.565821 | orchestrator |
2025-09-20 09:47:30.565830 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-20 09:47:30.565839 | orchestrator | Saturday 20 September 2025 09:36:32 +0000 (0:00:01.063) 0:00:08.090 ****
2025-09-20 09:47:30.565847 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.565857 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.565865 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.565876 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.566000 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.566010 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.566086 | orchestrator |
2025-09-20 09:47:30.566097 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-20 09:47:30.566107 | orchestrator | Saturday 20 September 2025 09:36:33 +0000 (0:00:01.170) 0:00:09.260 ****
2025-09-20 09:47:30.566117 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.566127 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.566136 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.566146 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.566155 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.566165 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.566174 | orchestrator |
2025-09-20 09:47:30.566183 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-20 09:47:30.566193 | orchestrator | Saturday 20 September 2025 09:36:34 +0000 (0:00:01.057) 0:00:10.318 ****
2025-09-20 09:47:30.566203 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-20 09:47:30.566213 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-20 09:47:30.566231 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-20 09:47:30.566240 | orchestrator |
2025-09-20 09:47:30.566249 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-20 09:47:30.566267 | orchestrator | Saturday 20 September 2025 09:36:35 +0000 (0:00:00.700) 0:00:11.019 ****
2025-09-20 09:47:30.566276 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.566284 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.566293 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.566301 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.566309 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.566318 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.566326 | orchestrator |
2025-09-20 09:47:30.566345 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-20 09:47:30.566355 | orchestrator | Saturday 20 September 2025 09:36:36 +0000 (0:00:00.980) 0:00:11.999 ****
2025-09-20 09:47:30.566363 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-20 09:47:30.566372 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-20 09:47:30.566380 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-20 09:47:30.566389 | orchestrator |
2025-09-20 09:47:30.566398 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-20 09:47:30.566406 | orchestrator | Saturday 20 September 2025 09:36:39 +0000 (0:00:03.410) 0:00:15.410 ****
2025-09-20 09:47:30.566415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-20 09:47:30.566423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-20 09:47:30.566432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-20 09:47:30.566440 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.566449 | orchestrator |
2025-09-20 09:47:30.566457 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-20 09:47:30.566466 | orchestrator | Saturday 20 September 2025 09:36:40 +0000 (0:00:00.905) 0:00:16.315 ****
2025-09-20 09:47:30.566498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.566511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.566557 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.566569 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.566578 | orchestrator |
2025-09-20 09:47:30.566586 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-20 09:47:30.566632 | orchestrator | Saturday 20 September 2025 09:36:42 +0000 (0:00:01.754) 0:00:18.070 ****
2025-09-20 09:47:30.566644 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.566656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.566665 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.566681 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.566690 | orchestrator |
2025-09-20 09:47:30.566748 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-20 09:47:30.566758 | orchestrator | Saturday 20 September 2025 09:36:42 +0000 (0:00:00.839) 0:00:18.909 ****
2025-09-20 09:47:30.566780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-20 09:36:36.937802', 'end': '2025-09-20 09:36:37.287606', 'delta': '0:00:00.349804', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.566793 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-20 09:36:38.017753', 'end': '2025-09-20 09:36:38.313484', 'delta': '0:00:00.295731', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.566802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-20 09:36:38.911025', 'end': '2025-09-20 09:36:39.229476', 'delta': '0:00:00.318451', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.566812 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.566820 | orchestrator |
2025-09-20 09:47:30.566829 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-20 09:47:30.566838 | orchestrator | Saturday 20 September 2025 09:36:43 +0000 (0:00:00.824) 0:00:19.733 ****
2025-09-20 09:47:30.566846 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.566855 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.566864 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.566872 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.566881 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.566889 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.566898 | orchestrator |
2025-09-20 09:47:30.566906 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-20 09:47:30.566915 | orchestrator | Saturday 20 September 2025 09:36:46 +0000 (0:00:02.764) 0:00:22.497 ****
2025-09-20 09:47:30.566924 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-20 09:47:30.566933 | orchestrator |
2025-09-20 09:47:30.566941 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-20 09:47:30.566956 | orchestrator | Saturday 20 September 2025 09:36:47 +0000 (0:00:00.658) 0:00:23.156 ****
2025-09-20 09:47:30.566964 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.566973 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.566981 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.566990 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.566998 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.567007 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.567016 | orchestrator |
2025-09-20 09:47:30.567024 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-20 09:47:30.567033 | orchestrator | Saturday 20 September 2025 09:36:48 +0000 (0:00:01.153) 0:00:24.310 ****
2025-09-20 09:47:30.567041 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567068 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.567077 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.567085 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.567094 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.567126 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.567135 | orchestrator |
2025-09-20 09:47:30.567144 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-20 09:47:30.567153 | orchestrator | Saturday 20 September 2025 09:36:49 +0000 (0:00:00.981) 0:00:25.291 ****
2025-09-20 09:47:30.567181 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567191 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.567199 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.567208 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.567264 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.567274 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.567282 | orchestrator |
2025-09-20 09:47:30.567291 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-20 09:47:30.567299 | orchestrator | Saturday 20 September 2025 09:36:50 +0000 (0:00:00.811) 0:00:26.103 ****
2025-09-20 09:47:30.567364 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567373 | orchestrator |
2025-09-20 09:47:30.567381 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-20 09:47:30.567390 | orchestrator | Saturday 20 September 2025 09:36:50 +0000 (0:00:00.130) 0:00:26.234 ****
2025-09-20 09:47:30.567403 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567433 | orchestrator |
2025-09-20 09:47:30.567468 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-20 09:47:30.567478 | orchestrator | Saturday 20 September 2025 09:36:50 +0000 (0:00:00.231) 0:00:26.465 ****
2025-09-20 09:47:30.567487 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567496 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.567504 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.567512 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.567521 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.567529 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.567538 | orchestrator |
2025-09-20 09:47:30.567552 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-20 09:47:30.567561 | orchestrator | Saturday 20 September 2025 09:36:51 +0000 (0:00:00.588) 0:00:27.053 ****
2025-09-20 09:47:30.567570 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567578 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.567587 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.567595 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.567603 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.567612 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.567620 | orchestrator |
2025-09-20 09:47:30.567629 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-20 09:47:30.567637 | orchestrator | Saturday 20 September 2025 09:36:51 +0000 (0:00:00.821) 0:00:27.874 ****
2025-09-20 09:47:30.567652 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567660 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.567669 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.567677 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.567686 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.567694 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.567702 | orchestrator |
2025-09-20 09:47:30.567711 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-20 09:47:30.567720 | orchestrator | Saturday 20 September 2025 09:36:52 +0000 (0:00:00.711) 0:00:28.586 ****
2025-09-20 09:47:30.567728 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567736 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.567745 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.567753 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.567762 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.567770 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.567837 | orchestrator |
2025-09-20 09:47:30.567847 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-20 09:47:30.567855 | orchestrator | Saturday 20 September 2025 09:36:53 +0000 (0:00:00.824) 0:00:29.410 ****
2025-09-20 09:47:30.567864 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567872 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.567881 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.567889 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.567897 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.567906 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.567914 | orchestrator |
2025-09-20 09:47:30.567923 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-20 09:47:30.567931 | orchestrator | Saturday 20 September 2025 09:36:54 +0000 (0:00:00.705) 0:00:30.116 ****
2025-09-20 09:47:30.567940 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.567948 | orchestrator | skipping:
[testbed-node-4] 2025-09-20 09:47:30.567956 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.567965 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.567973 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.567981 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.567990 | orchestrator | 2025-09-20 09:47:30.568023 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-20 09:47:30.568033 | orchestrator | Saturday 20 September 2025 09:36:55 +0000 (0:00:01.734) 0:00:31.850 **** 2025-09-20 09:47:30.568041 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.568127 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.568137 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.568146 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.568154 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.568204 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.568213 | orchestrator | 2025-09-20 09:47:30.568222 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-20 09:47:30.568230 | orchestrator | Saturday 20 September 2025 09:36:57 +0000 (0:00:01.454) 0:00:33.304 **** 2025-09-20 09:47:30.568240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cf3001a--a2bc--51f5--b2f0--80e0839adf22-osd--block--0cf3001a--a2bc--51f5--b2f0--80e0839adf22', 'dm-uuid-LVM-DnxaRx4DprVvTXzxq8pMkQFvz3WaKE38Lyl8FSyIkpr1S80xWH0OiUpXNiW0RKeS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568274 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f5012b99--8722--5cc3--9d11--b95ce6d4943a-osd--block--f5012b99--8722--5cc3--9d11--b95ce6d4943a', 'dm-uuid-LVM-07jPszdudCYLb2kASjjnJtPSDZyJdJjQhxeBPSpwXeHMqnT4tfVmcxh3U6deVX6u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568341 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6319afae--7c48--5c70--87a8--62ab4a9b6a4c-osd--block--6319afae--7c48--5c70--87a8--62ab4a9b6a4c', 'dm-uuid-LVM-c3et89XgjnYzPyeJL9a81ueXLiENcEOzlZVVYIoRqRR2d3uSdOIpiK7du2GL1b3C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 
09:47:30.568377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--606172b3--e8d7--56e6--aaf4--86ed1800c0e9-osd--block--606172b3--e8d7--56e6--aaf4--86ed1800c0e9', 'dm-uuid-LVM-03WnYp6gxYyqDFetCQKqxkq0bm37VEwg0Vwjfen20ut1ZR6SH0cF2Nawnj8KCybv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-09-20 09:47:30.568428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568538 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0cf3001a--a2bc--51f5--b2f0--80e0839adf22-osd--block--0cf3001a--a2bc--51f5--b2f0--80e0839adf22'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N2ncj8-uyRk-vw9F-J5nI-U1nn-KYce-KddKqt', 'scsi-0QEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9', 'scsi-SQEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f5012b99--8722--5cc3--9d11--b95ce6d4943a-osd--block--f5012b99--8722--5cc3--9d11--b95ce6d4943a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TJkZzW-8Taz-pYNg-NNzH-IGej-j2Gt-WcbLkR', 'scsi-0QEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650', 'scsi-SQEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part1', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part14', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part15', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part16', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568612 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b', 'scsi-SQEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6319afae--7c48--5c70--87a8--62ab4a9b6a4c-osd--block--6319afae--7c48--5c70--87a8--62ab4a9b6a4c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O982OA-wzSU-Y1e0-LRH0-wgZa-u0Jn-23wVP7', 'scsi-0QEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a', 'scsi-SQEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--606172b3--e8d7--56e6--aaf4--86ed1800c0e9-osd--block--606172b3--e8d7--56e6--aaf4--86ed1800c0e9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cABhdL-iJOq-eVrR-Dqgx-nq7q-XgIR-oWwkmG', 'scsi-0QEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9', 'scsi-SQEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26', 'scsi-SQEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:47:30.568729 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.568738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0e476ce--8dbb--5cb3--b205--e96c67f25126-osd--block--a0e476ce--8dbb--5cb3--b205--e96c67f25126', 'dm-uuid-LVM-SRbLLW0bcwwOR0uc4hmvTM1QEiG0HhbjLT2nH0SZBAt0CHunNFLdADyuLankUCNB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--54d5d251--b5b9--5293--b72e--54d20a6e98e4-osd--block--54d5d251--b5b9--5293--b72e--54d20a6e98e4', 'dm-uuid-LVM-wXEP2xjRPSa6cJb6tnE8v9DVUuVIBoookWjzCnwiNfdLk3lO02TOwJ410DYgvQQp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:47:30.568819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-20 09:47:30.568944 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.569154 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.569429 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.569437 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.569615 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.569624 | orchestrator |
2025-09-20 09:47:30.569632 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-09-20 09:47:30.569641 | orchestrator | Saturday 20 September 2025 09:36:59 +0000 (0:00:01.829) 0:00:35.133 ****
2025-09-20 09:47:30.569651 | orchestrator | skipping: [testbed-node-3] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cf3001a--a2bc--51f5--b2f0--80e0839adf22-osd--block--0cf3001a--a2bc--51f5--b2f0--80e0839adf22', 'dm-uuid-LVM-DnxaRx4DprVvTXzxq8pMkQFvz3WaKE38Lyl8FSyIkpr1S80xWH0OiUpXNiW0RKeS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571162 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571173 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6319afae--7c48--5c70--87a8--62ab4a9b6a4c-osd--block--6319afae--7c48--5c70--87a8--62ab4a9b6a4c', 'dm-uuid-LVM-c3et89XgjnYzPyeJL9a81ueXLiENcEOzlZVVYIoRqRR2d3uSdOIpiK7du2GL1b3C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571225 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--606172b3--e8d7--56e6--aaf4--86ed1800c0e9-osd--block--606172b3--e8d7--56e6--aaf4--86ed1800c0e9', 'dm-uuid-LVM-03WnYp6gxYyqDFetCQKqxkq0bm37VEwg0Vwjfen20ut1ZR6SH0cF2Nawnj8KCybv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571245 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571273 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571311 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571327 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571337 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571346 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571366 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part1', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part14', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part15', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part16', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-20 09:47:30.571382 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6319afae--7c48--5c70--87a8--62ab4a9b6a4c-osd--block--6319afae--7c48--5c70--87a8--62ab4a9b6a4c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O982OA-wzSU-Y1e0-LRH0-wgZa-u0Jn-23wVP7', 'scsi-0QEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a', 'scsi-SQEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571392 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--606172b3--e8d7--56e6--aaf4--86ed1800c0e9-osd--block--606172b3--e8d7--56e6--aaf4--86ed1800c0e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cABhdL-iJOq-eVrR-Dqgx-nq7q-XgIR-oWwkmG', 'scsi-0QEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9', 'scsi-SQEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571401 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.571411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26', 'scsi-SQEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571430 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-21-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571445 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0e476ce--8dbb--5cb3--b205--e96c67f25126-osd--block--a0e476ce--8dbb--5cb3--b205--e96c67f25126', 'dm-uuid-LVM-SRbLLW0bcwwOR0uc4hmvTM1QEiG0HhbjLT2nH0SZBAt0CHunNFLdADyuLankUCNB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571454 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54d5d251--b5b9--5293--b72e--54d20a6e98e4-osd--block--54d5d251--b5b9--5293--b72e--54d20a6e98e4', 'dm-uuid-LVM-wXEP2xjRPSa6cJb6tnE8v9DVUuVIBoookWjzCnwiNfdLk3lO02TOwJ410DYgvQQp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571463 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571472 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571481 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.571494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571511 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571527 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571538 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571548 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571559 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571569 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571583 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571605 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571616 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571627 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571637 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571647 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571672 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72', 'scsi-SQEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72-part1', 'scsi-SQEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72-part14', 'scsi-SQEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72-part15', 'scsi-SQEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72-part16', 'scsi-SQEMU_QEMU_HARDDISK_03b2b7ad-c51a-4c61-a057-9ad554ca1a72-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 
'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571689 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571700 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:47:30.571722 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-20 09:47:30.571741 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a0e476ce--8dbb--5cb3--b205--e96c67f25126-osd--block--a0e476ce--8dbb--5cb3--b205--e96c67f25126'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vDKqvI-weOD-MTIA-gDzz-9iik-tIGJ-YonfAo', 'scsi-0QEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57', 'scsi-SQEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571752 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.571762 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--54d5d251--b5b9--5293--b72e--54d20a6e98e4-osd--block--54d5d251--b5b9--5293--b72e--54d20a6e98e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wrW4BB-ofv8-nPOQ-UXqq-qjVP-7pjM-mMJve7', 'scsi-0QEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5', 'scsi-SQEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571773 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d', 'scsi-SQEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571814 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571831 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571842 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.571853 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571862 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571870 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571879 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571897 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571912 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571922 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571931 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571945 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6', 'scsi-SQEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_d4ce8cad-bc1d-4843-90cc-8408c6fa71a6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571968 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571978 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.571987 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.571996 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.572005 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.572014 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.572032 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.572099 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.572110 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.572120 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9', 'scsi-SQEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_ed050e13-890c-4196-a879-2427cfc2dfe9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}},
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.572141 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-20 09:47:30.572150 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.572159 | orchestrator |
2025-09-20 09:47:30.572168 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-20 09:47:30.572178 | orchestrator | Saturday 20 September 2025 09:37:01 +0000 (0:00:02.728) 0:00:37.861 ****
2025-09-20 09:47:30.572192 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.572202 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.572211 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.572219 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.572228 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.572236 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.572245 | orchestrator |
2025-09-20 09:47:30.572254 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-20 09:47:30.572262 | orchestrator | Saturday 20 September 2025 09:37:03 +0000 (0:00:01.347) 0:00:39.209 ****
2025-09-20 09:47:30.572271 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.572279 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.572288 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.572296 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.572305 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.572313 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.572322 | orchestrator |
2025-09-20 09:47:30.572330 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-20 09:47:30.572339 | orchestrator | Saturday 20 September 2025 09:37:03 +0000 (0:00:00.727) 0:00:39.937 ****
2025-09-20 09:47:30.572347 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.572356 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.572364 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.572373 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.572381 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.572390 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.572398 | orchestrator |
2025-09-20 09:47:30.572407 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-20 09:47:30.572415 | orchestrator | Saturday 20 September 2025 09:37:04 +0000 (0:00:00.823) 0:00:40.761 ****
2025-09-20 09:47:30.572424 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.572432 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.572441 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.572449 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.572458 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.572466 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.572475 | orchestrator |
2025-09-20 09:47:30.572483 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-20 09:47:30.572491 | orchestrator | Saturday 20 September 2025 09:37:05 +0000 (0:00:00.756) 0:00:41.517 ****
2025-09-20 09:47:30.572498 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.572511 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.572519 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.572527 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.572534 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.572542 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.572550 | orchestrator |
2025-09-20 09:47:30.572558 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-20 09:47:30.572565 | orchestrator | Saturday 20 September 2025 09:37:06 +0000 (0:00:01.193) 0:00:42.711 ****
2025-09-20 09:47:30.572573 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.572581 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.572589 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.572596 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.572604 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.572612 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.572620 | orchestrator |
2025-09-20 09:47:30.572628 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-20 09:47:30.572635 | orchestrator | Saturday 20 September 2025 09:37:07 +0000 (0:00:00.876) 0:00:43.587 ****
2025-09-20 09:47:30.572643 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-20 09:47:30.572651 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-20 09:47:30.572659 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-20 09:47:30.572667 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-20 09:47:30.572674 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-20 09:47:30.572682 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-20 09:47:30.572690 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-20 09:47:30.572697 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-20 09:47:30.572705 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-20 09:47:30.572713 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-20 09:47:30.572720 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-20 09:47:30.572728 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-20 09:47:30.572735 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-20 09:47:30.572743 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-20 09:47:30.572751 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-20 09:47:30.572759 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-20 09:47:30.572766 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-20 09:47:30.572774 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-20 09:47:30.572782 | orchestrator |
2025-09-20 09:47:30.572789 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-20 09:47:30.572797 | orchestrator | Saturday 20 September 2025 09:37:11 +0000 (0:00:03.752) 0:00:47.340 ****
2025-09-20 09:47:30.572805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-20 09:47:30.572813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-20 09:47:30.572820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-20 09:47:30.572835 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.572843 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-20 09:47:30.572850 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-20 09:47:30.572858 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-20 09:47:30.572866 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-20 09:47:30.572874 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-20 09:47:30.572881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-20 09:47:30.572893 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.572902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-20 09:47:30.572915 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-20 09:47:30.572923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-20 09:47:30.572930 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.572938 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-20 09:47:30.572946 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-20 09:47:30.572954 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-20 09:47:30.572961 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.572969 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.572977 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-20 09:47:30.572985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-20 09:47:30.572992 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-20 09:47:30.573000 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.573008 | orchestrator |
2025-09-20 09:47:30.573016 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-20 09:47:30.573024 | orchestrator | Saturday 20 September 2025 09:37:12 +0000 (0:00:01.120) 0:00:48.461 ****
2025-09-20 09:47:30.573031 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.573039 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.573062 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.573070 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:47:30.573078 | orchestrator |
2025-09-20 09:47:30.573086 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-20 09:47:30.573095 | orchestrator | Saturday 20 September 2025 09:37:13 +0000 (0:00:01.281) 0:00:49.742 ****
2025-09-20 09:47:30.573102 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.573110 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.573118 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.573126 | orchestrator |
2025-09-20 09:47:30.573133 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-20 09:47:30.573141 | orchestrator | Saturday 20 September 2025 09:37:14 +0000 (0:00:00.351) 0:00:50.094 ****
2025-09-20 09:47:30.573149 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.573157 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.573164 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.573172 | orchestrator |
2025-09-20 09:47:30.573180 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-20 09:47:30.573188 | orchestrator | Saturday 20 September 2025 09:37:14 +0000 (0:00:00.333) 0:00:50.427 ****
2025-09-20 09:47:30.573196 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.573203 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.573211 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.573218 | orchestrator |
2025-09-20 09:47:30.573226 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-20 09:47:30.573234 | orchestrator | Saturday 20 September 2025 09:37:14 +0000 (0:00:00.350) 0:00:50.778 ****
2025-09-20 09:47:30.573242 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.573250 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.573257 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.573265 | orchestrator |
2025-09-20 09:47:30.573273 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-20 09:47:30.573281 | orchestrator | Saturday 20 September 2025 09:37:15 +0000 (0:00:00.776) 0:00:51.554 ****
2025-09-20 09:47:30.573288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:47:30.573296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-20 09:47:30.573304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-20 09:47:30.573312 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.573325 | orchestrator |
2025-09-20 09:47:30.573333 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-20 09:47:30.573340 | orchestrator | Saturday 20 September 2025 09:37:15 +0000 (0:00:00.417) 0:00:51.971 ****
2025-09-20 09:47:30.573348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:47:30.573356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-20 09:47:30.573364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-20 09:47:30.573372 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.573379 | orchestrator |
2025-09-20 09:47:30.573387 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-20 09:47:30.573395 | orchestrator | Saturday 20 September 2025 09:37:16 +0000 (0:00:00.355) 0:00:52.327 ****
2025-09-20 09:47:30.573403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:47:30.573410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-20 09:47:30.573418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-20 09:47:30.573426 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.573433 | orchestrator |
2025-09-20 09:47:30.573441 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-20 09:47:30.573449 | orchestrator | Saturday 20 September 2025 09:37:16 +0000 (0:00:00.398) 0:00:52.725 ****
2025-09-20 09:47:30.573457 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.573468 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.573476 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.573484 | orchestrator |
2025-09-20 09:47:30.573492 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-20 09:47:30.573499 | orchestrator | Saturday 20 September 2025 09:37:17 +0000 (0:00:00.294) 0:00:53.020 ****
2025-09-20 09:47:30.573507 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-20 09:47:30.573515 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-20 09:47:30.573523 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-20 09:47:30.573530 | orchestrator |
2025-09-20 09:47:30.573755 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-20 09:47:30.573768 | orchestrator | Saturday 20 September 2025 09:37:17 +0000 (0:00:00.588) 0:00:53.609 ****
2025-09-20 09:47:30.573777 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-20 09:47:30.573785 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-20 09:47:30.573793 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-20 09:47:30.573801 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:47:30.573809 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-20 09:47:30.573817 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-20 09:47:30.573825 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-20 09:47:30.573833 | orchestrator |
2025-09-20 09:47:30.573841 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-20 09:47:30.573848 | orchestrator | Saturday 20 September 2025 09:37:18 +0000 (0:00:01.028) 0:00:54.637 ****
2025-09-20 09:47:30.573856 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-20 09:47:30.573864 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-20 09:47:30.573872 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-20 09:47:30.573880 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:47:30.573887 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-20 09:47:30.573895 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-20 09:47:30.573903 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-20 09:47:30.573917 | orchestrator |
2025-09-20 09:47:30.573925 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-20 09:47:30.573933 | orchestrator | Saturday 20 September 2025 09:37:20 +0000 (0:00:01.925) 0:00:56.563 ****
2025-09-20 09:47:30.573941 | orchestrator | included:
/ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.573950 | orchestrator | 2025-09-20 09:47:30.573957 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 09:47:30.573965 | orchestrator | Saturday 20 September 2025 09:37:21 +0000 (0:00:01.367) 0:00:57.930 **** 2025-09-20 09:47:30.573973 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.573981 | orchestrator | 2025-09-20 09:47:30.573989 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 09:47:30.573997 | orchestrator | Saturday 20 September 2025 09:37:23 +0000 (0:00:01.303) 0:00:59.234 **** 2025-09-20 09:47:30.574005 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.574012 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.574081 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.574090 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.574098 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.574106 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.574114 | orchestrator | 2025-09-20 09:47:30.574122 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 09:47:30.574130 | orchestrator | Saturday 20 September 2025 09:37:24 +0000 (0:00:01.198) 0:01:00.433 **** 2025-09-20 09:47:30.574138 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.574146 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.574154 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.574162 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.574170 | orchestrator | ok: [testbed-node-4] 2025-09-20 
09:47:30.574178 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.574186 | orchestrator | 2025-09-20 09:47:30.574194 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 09:47:30.574201 | orchestrator | Saturday 20 September 2025 09:37:25 +0000 (0:00:01.087) 0:01:01.521 **** 2025-09-20 09:47:30.574209 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.574217 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.574225 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.574249 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.574258 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.574265 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.574273 | orchestrator | 2025-09-20 09:47:30.574281 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 09:47:30.574289 | orchestrator | Saturday 20 September 2025 09:37:27 +0000 (0:00:01.933) 0:01:03.454 **** 2025-09-20 09:47:30.574297 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.574305 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.574312 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.574320 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.574328 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.574336 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.574344 | orchestrator | 2025-09-20 09:47:30.574357 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 09:47:30.574366 | orchestrator | Saturday 20 September 2025 09:37:28 +0000 (0:00:01.096) 0:01:04.551 **** 2025-09-20 09:47:30.574375 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.574384 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.574392 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.574401 | orchestrator | ok: 
[testbed-node-0] 2025-09-20 09:47:30.574410 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.574426 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.574435 | orchestrator | 2025-09-20 09:47:30.574444 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 09:47:30.574459 | orchestrator | Saturday 20 September 2025 09:37:29 +0000 (0:00:01.144) 0:01:05.695 **** 2025-09-20 09:47:30.574469 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.574478 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.574487 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.574496 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.574505 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.574514 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.574523 | orchestrator | 2025-09-20 09:47:30.574532 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 09:47:30.574541 | orchestrator | Saturday 20 September 2025 09:37:30 +0000 (0:00:00.746) 0:01:06.442 **** 2025-09-20 09:47:30.574550 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.574559 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.574568 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.574577 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.574586 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.574595 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.574603 | orchestrator | 2025-09-20 09:47:30.574612 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 09:47:30.574621 | orchestrator | Saturday 20 September 2025 09:37:31 +0000 (0:00:00.623) 0:01:07.065 **** 2025-09-20 09:47:30.574630 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.574639 | orchestrator | ok: [testbed-node-4] 2025-09-20 
09:47:30.574648 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.574657 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.574666 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.574675 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.574684 | orchestrator | 2025-09-20 09:47:30.574693 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 09:47:30.574702 | orchestrator | Saturday 20 September 2025 09:37:32 +0000 (0:00:01.416) 0:01:08.481 **** 2025-09-20 09:47:30.574711 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.574720 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.574728 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.574736 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.574744 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.574752 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.574759 | orchestrator | 2025-09-20 09:47:30.574767 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 09:47:30.574775 | orchestrator | Saturday 20 September 2025 09:37:33 +0000 (0:00:01.415) 0:01:09.897 **** 2025-09-20 09:47:30.574783 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.574791 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.574799 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.574807 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.574814 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.574822 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.574830 | orchestrator | 2025-09-20 09:47:30.574838 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 09:47:30.574845 | orchestrator | Saturday 20 September 2025 09:37:34 +0000 (0:00:00.874) 0:01:10.771 **** 2025-09-20 09:47:30.574853 | orchestrator | skipping: [testbed-node-3] 
2025-09-20 09:47:30.574861 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.574869 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.574877 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.574884 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.574892 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.574900 | orchestrator | 2025-09-20 09:47:30.574908 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 09:47:30.574916 | orchestrator | Saturday 20 September 2025 09:37:35 +0000 (0:00:00.752) 0:01:11.523 **** 2025-09-20 09:47:30.574929 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.574937 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.574945 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.574953 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.574961 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.574969 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.574976 | orchestrator | 2025-09-20 09:47:30.574984 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 09:47:30.574992 | orchestrator | Saturday 20 September 2025 09:37:36 +0000 (0:00:01.356) 0:01:12.880 **** 2025-09-20 09:47:30.575000 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.575008 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.575016 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.575023 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.575031 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.575039 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.575090 | orchestrator | 2025-09-20 09:47:30.575099 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 09:47:30.575107 | orchestrator | Saturday 20 September 2025 09:37:37 +0000 (0:00:00.536) 
0:01:13.417 **** 2025-09-20 09:47:30.575115 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.575123 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.575130 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.575138 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.575146 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.575154 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.575161 | orchestrator | 2025-09-20 09:47:30.575169 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 09:47:30.575177 | orchestrator | Saturday 20 September 2025 09:37:38 +0000 (0:00:00.749) 0:01:14.166 **** 2025-09-20 09:47:30.575185 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.575193 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.575200 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.575208 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.575216 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.575224 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.575235 | orchestrator | 2025-09-20 09:47:30.575244 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 09:47:30.575252 | orchestrator | Saturday 20 September 2025 09:37:38 +0000 (0:00:00.581) 0:01:14.748 **** 2025-09-20 09:47:30.575260 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.575267 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.575275 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.575283 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.575291 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.575298 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.575306 | orchestrator | 2025-09-20 09:47:30.575318 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] 
****************************** 2025-09-20 09:47:30.575327 | orchestrator | Saturday 20 September 2025 09:37:39 +0000 (0:00:00.709) 0:01:15.457 **** 2025-09-20 09:47:30.575334 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.575341 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.575347 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.575354 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.575361 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.575367 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.575374 | orchestrator | 2025-09-20 09:47:30.575381 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 09:47:30.575387 | orchestrator | Saturday 20 September 2025 09:37:40 +0000 (0:00:00.556) 0:01:16.014 **** 2025-09-20 09:47:30.575394 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.575400 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.575407 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.575418 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.575425 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.575431 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.575438 | orchestrator | 2025-09-20 09:47:30.575445 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 09:47:30.575451 | orchestrator | Saturday 20 September 2025 09:37:41 +0000 (0:00:01.044) 0:01:17.058 **** 2025-09-20 09:47:30.575458 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.575464 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.575471 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.575477 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.575484 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.575490 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.575497 | orchestrator | 2025-09-20 09:47:30.575504 | orchestrator | 
TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-20 09:47:30.575510 | orchestrator | Saturday 20 September 2025 09:37:42 +0000 (0:00:01.242) 0:01:18.300 **** 2025-09-20 09:47:30.575517 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.575524 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.575530 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.575537 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.575544 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.575550 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.575557 | orchestrator | 2025-09-20 09:47:30.575563 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-20 09:47:30.575570 | orchestrator | Saturday 20 September 2025 09:37:44 +0000 (0:00:01.858) 0:01:20.159 **** 2025-09-20 09:47:30.575577 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.575583 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.575590 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.575596 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.575603 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.575609 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.575616 | orchestrator | 2025-09-20 09:47:30.575623 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-20 09:47:30.575629 | orchestrator | Saturday 20 September 2025 09:37:46 +0000 (0:00:02.160) 0:01:22.319 **** 2025-09-20 09:47:30.575636 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.575643 | orchestrator | 2025-09-20 09:47:30.575649 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 
2025-09-20 09:47:30.575656 | orchestrator | Saturday 20 September 2025 09:37:47 +0000 (0:00:01.204) 0:01:23.524 **** 2025-09-20 09:47:30.575663 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.575669 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.575676 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.575682 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.575689 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.575696 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.575702 | orchestrator | 2025-09-20 09:47:30.575709 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-20 09:47:30.575715 | orchestrator | Saturday 20 September 2025 09:37:48 +0000 (0:00:00.616) 0:01:24.140 **** 2025-09-20 09:47:30.575722 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.575728 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.575735 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.575741 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.575748 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.575754 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.575761 | orchestrator | 2025-09-20 09:47:30.575768 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-20 09:47:30.575774 | orchestrator | Saturday 20 September 2025 09:37:48 +0000 (0:00:00.822) 0:01:24.963 **** 2025-09-20 09:47:30.575787 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 09:47:30.575793 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 09:47:30.575800 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 09:47:30.575807 | orchestrator | ok: [testbed-node-0] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 09:47:30.575813 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 09:47:30.575820 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-20 09:47:30.575830 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 09:47:30.575837 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 09:47:30.575843 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 09:47:30.575850 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 09:47:30.575856 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 09:47:30.575866 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-20 09:47:30.575873 | orchestrator | 2025-09-20 09:47:30.575880 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-20 09:47:30.575886 | orchestrator | Saturday 20 September 2025 09:37:50 +0000 (0:00:01.363) 0:01:26.327 **** 2025-09-20 09:47:30.575893 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.575900 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.575907 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.575913 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.575920 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.575926 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.575933 | orchestrator | 2025-09-20 09:47:30.575940 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-20 09:47:30.575946 | orchestrator | Saturday 20 September 2025 09:37:51 +0000 
(0:00:01.218) 0:01:27.546 **** 2025-09-20 09:47:30.575953 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.575959 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.575966 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.575973 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.575979 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.575986 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.575992 | orchestrator | 2025-09-20 09:47:30.575999 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-20 09:47:30.576005 | orchestrator | Saturday 20 September 2025 09:37:52 +0000 (0:00:00.655) 0:01:28.202 **** 2025-09-20 09:47:30.576012 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.576019 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.576025 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.576032 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.576038 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.576056 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.576063 | orchestrator | 2025-09-20 09:47:30.576070 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-20 09:47:30.576077 | orchestrator | Saturday 20 September 2025 09:37:53 +0000 (0:00:00.824) 0:01:29.026 **** 2025-09-20 09:47:30.576084 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.576090 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.576097 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.576103 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.576110 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.576116 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.576123 | orchestrator | 2025-09-20 09:47:30.576130 | orchestrator | TASK [ceph-container-common : 
Include fetch_image.yml] ************************* 2025-09-20 09:47:30.576142 | orchestrator | Saturday 20 September 2025 09:37:53 +0000 (0:00:00.498) 0:01:29.524 **** 2025-09-20 09:47:30.576149 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.576156 | orchestrator | 2025-09-20 09:47:30.576162 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-20 09:47:30.576169 | orchestrator | Saturday 20 September 2025 09:37:54 +0000 (0:00:00.997) 0:01:30.522 **** 2025-09-20 09:47:30.576175 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.576182 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.576189 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.576195 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.576202 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.576209 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.576215 | orchestrator | 2025-09-20 09:47:30.576222 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-20 09:47:30.576229 | orchestrator | Saturday 20 September 2025 09:38:45 +0000 (0:00:50.636) 0:02:21.158 **** 2025-09-20 09:47:30.576235 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 09:47:30.576242 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 09:47:30.576249 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 09:47:30.576255 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.576262 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 09:47:30.576269 | orchestrator | skipping: [testbed-node-4] => 
(item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 09:47:30.576275 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 09:47:30.576282 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.576289 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 09:47:30.576295 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 09:47:30.576302 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 09:47:30.576308 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.576315 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 09:47:30.576322 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 09:47:30.576328 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 09:47:30.576335 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.576345 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 09:47:30.576352 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 09:47:30.576358 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 09:47:30.576365 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.576372 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-20 09:47:30.576381 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-20 09:47:30.576388 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-20 09:47:30.576395 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.576402 | orchestrator | 2025-09-20 09:47:30.576409 | orchestrator | TASK 
[ceph-container-common : Pulling node-exporter container image] ***********
Saturday 20 September 2025 09:38:45 +0000 (0:00:00.724) 0:02:21.883 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Export local ceph dev image] *********************
Saturday 20 September 2025 09:38:46 +0000 (0:00:01.107) 0:02:22.990 ****
skipping: [testbed-node-3]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Saturday 20 September 2025 09:38:47 +0000 (0:00:00.186) 0:02:23.177 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Load ceph dev image] *****************************
Saturday 20 September 2025 09:38:47 +0000 (0:00:00.719) 0:02:23.897 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Saturday 20 September 2025 09:38:48 +0000 (0:00:00.928) 0:02:24.825 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Get ceph version] ********************************
Saturday 20 September 2025 09:38:49 +0000 (0:00:00.672) 0:02:25.498 ****
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Saturday 20 September 2025 09:38:51 +0000 (0:00:02.471) 0:02:27.969 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Include release.yml] *****************************
Saturday 20 September 2025 09:38:52 +0000 (0:00:00.543) 0:02:28.513 ****
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Saturday 20 September 2025 09:38:53 +0000 (0:00:01.071) 0:02:29.585 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Saturday 20 September 2025 09:38:54 +0000 (0:00:00.566) 0:02:30.151 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Saturday 20 September 2025 09:38:54 +0000 (0:00:00.764) 0:02:30.916 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Saturday 20 September 2025 09:38:55 +0000 (0:00:00.525) 0:02:31.441 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Saturday 20 September 2025 09:38:56 +0000 (0:00:00.745) 0:02:32.186 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Saturday 20 September 2025 09:38:56 +0000 (0:00:00.566) 0:02:32.753 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Saturday 20 September 2025 09:38:57 +0000 (0:00:00.829) 0:02:33.582 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Saturday 20 September 2025 09:38:58 +0000 (0:00:00.743) 0:02:34.325 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Saturday 20 September 2025 09:38:59 +0000 (0:00:00.814) 0:02:35.140 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Saturday 20 September 2025 09:39:00 +0000 (0:00:01.181) 0:02:36.322 ****
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-config : Create ceph initial directories] ***************************
Saturday 20 September 2025 09:39:01 +0000 (0:00:01.146) 0:02:37.469 ****
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/log/ceph)
changed: [testbed-node-4] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-1] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Saturday 20 September 2025 09:39:08 +0000 (0:00:07.227) 0:02:44.696 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create rados gateway instance directories] *****************
Saturday 20 September 2025 09:39:09 +0000 (0:00:00.827) 0:02:45.524 ****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Saturday 20 September 2025 09:39:10 +0000 (0:00:00.608) 0:02:46.132 ****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] ********************************************
Saturday 20 September 2025 09:39:11 +0000 (0:00:01.392) 0:02:47.525 ****
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Saturday 20 September 2025 09:39:12 +0000 (0:00:00.793) 0:02:48.318 ****
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-0]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Saturday 20 September 2025 09:39:13 +0000 (0:00:00.915) 0:02:49.233 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Saturday 20 September 2025 09:39:14 +0000 (0:00:01.211) 0:02:50.445 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _devices] *****************************************
Saturday 20 September 2025 09:39:15 +0000 (0:00:00.643) 0:02:51.088 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Saturday 20 September 2025 09:39:15 +0000 (0:00:00.749) 0:02:51.837 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Saturday 20 September 2025 09:39:16 +0000 (0:00:00.705) 0:02:52.543 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-5]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Saturday 20 September 2025 09:39:17 +0000 (0:00:01.176) 0:02:53.719 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Saturday 20 September 2025 09:39:18 +0000 (0:00:00.683) 0:02:54.402 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Saturday 20 September 2025 09:39:21 +0000 (0:00:03.142) 0:02:57.544 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Saturday 20 September 2025 09:39:22 +0000 (0:00:00.779) 0:02:58.324 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Saturday 20 September 2025 09:39:23 +0000 (0:00:00.783) 0:02:59.108 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Render rgw configs] ****************************************
Saturday 20 September 2025 09:39:23 +0000 (0:00:00.689) 0:02:59.797 ****
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [ceph-config : Set config to cluster] *************************************
Saturday 20 September 2025 09:39:24 +0000 (0:00:00.889) 0:03:00.687 ****
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-0]

TASK [ceph-config : Set rgw configs to file] ***********************************
Saturday 20 September 2025 09:39:25 +0000 (0:00:01.157) 0:03:01.844 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Create ceph conf directory] ********************************
Saturday 20 September 2025 09:39:26 +0000 (0:00:01.151) 0:03:02.996 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Saturday 20 September 2025 09:39:27 +0000 (0:00:00.853) 0:03:03.849 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Saturday 20 September 2025 09:39:28 +0000 (0:00:00.927) 0:03:04.776 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Saturday 20 September 2025 09:39:29 +0000 (0:00:01.008) 0:03:05.284 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Saturday 20 September 2025 09:39:30 +0000 (0:00:01.008) 0:03:06.292 ****
ok: [testbed-node-3]
skipping: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Saturday 20 September 2025 09:39:31 +0000 (0:00:00.869) 0:03:07.162 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Saturday 20 September 2025 09:39:31 +0000 (0:00:00.524) 0:03:07.686 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:47:30.579640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:47:30.579646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:47:30.579652 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.579658 | orchestrator | 2025-09-20 09:47:30.579664 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-20 09:47:30.579670 | orchestrator | Saturday 20 September 2025 09:39:32 +0000 (0:00:00.821) 0:03:08.508 **** 2025-09-20 09:47:30.579676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:47:30.579682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:47:30.579688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:47:30.579694 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.579700 | orchestrator | 2025-09-20 09:47:30.579706 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-20 09:47:30.579712 | orchestrator | Saturday 20 September 2025 09:39:33 +0000 (0:00:00.983) 0:03:09.491 **** 2025-09-20 09:47:30.579718 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.579724 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.579730 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.579737 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.579743 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.579749 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.579755 | orchestrator | 2025-09-20 09:47:30.579761 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-20 09:47:30.579767 | orchestrator | Saturday 20 September 2025 09:39:34 +0000 (0:00:00.721) 0:03:10.213 **** 2025-09-20 09:47:30.579773 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-20 09:47:30.579779 | 
orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-20 09:47:30.579785 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-20 09:47:30.579791 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-20 09:47:30.579797 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-20 09:47:30.579803 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.579809 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.579815 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-20 09:47:30.579821 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.579834 | orchestrator | 2025-09-20 09:47:30.579840 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-20 09:47:30.579846 | orchestrator | Saturday 20 September 2025 09:39:36 +0000 (0:00:02.631) 0:03:12.845 **** 2025-09-20 09:47:30.579853 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.579859 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.579865 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.579871 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.579877 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.579883 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.579889 | orchestrator | 2025-09-20 09:47:30.579895 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 09:47:30.579901 | orchestrator | Saturday 20 September 2025 09:39:40 +0000 (0:00:03.496) 0:03:16.342 **** 2025-09-20 09:47:30.579907 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.579913 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.579919 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.579925 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.579931 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.579937 | orchestrator | changed: 
[testbed-node-2] 2025-09-20 09:47:30.579943 | orchestrator | 2025-09-20 09:47:30.579949 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-20 09:47:30.579955 | orchestrator | Saturday 20 September 2025 09:39:42 +0000 (0:00:02.103) 0:03:18.445 **** 2025-09-20 09:47:30.579961 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.579967 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.579977 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.579983 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.579989 | orchestrator | 2025-09-20 09:47:30.579995 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-20 09:47:30.580001 | orchestrator | Saturday 20 September 2025 09:39:43 +0000 (0:00:01.164) 0:03:19.610 **** 2025-09-20 09:47:30.580007 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.580013 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.580019 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.580025 | orchestrator | 2025-09-20 09:47:30.580080 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-20 09:47:30.580089 | orchestrator | Saturday 20 September 2025 09:39:44 +0000 (0:00:00.651) 0:03:20.261 **** 2025-09-20 09:47:30.580095 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.580101 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.580107 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.580113 | orchestrator | 2025-09-20 09:47:30.580120 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-20 09:47:30.580126 | orchestrator | Saturday 20 September 2025 09:39:45 +0000 (0:00:01.337) 0:03:21.599 **** 2025-09-20 09:47:30.580132 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2025-09-20 09:47:30.580138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-20 09:47:30.580144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-20 09:47:30.580150 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.580156 | orchestrator | 2025-09-20 09:47:30.580162 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-20 09:47:30.580168 | orchestrator | Saturday 20 September 2025 09:39:46 +0000 (0:00:00.875) 0:03:22.475 **** 2025-09-20 09:47:30.580174 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.580180 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.580186 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.580192 | orchestrator | 2025-09-20 09:47:30.580199 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-20 09:47:30.580205 | orchestrator | Saturday 20 September 2025 09:39:46 +0000 (0:00:00.364) 0:03:22.840 **** 2025-09-20 09:47:30.580211 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.580222 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.580228 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.580234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.580240 | orchestrator | 2025-09-20 09:47:30.580246 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-20 09:47:30.580252 | orchestrator | Saturday 20 September 2025 09:39:48 +0000 (0:00:01.375) 0:03:24.215 **** 2025-09-20 09:47:30.580258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:47:30.580265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:47:30.580271 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-09-20 09:47:30.580277 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580283 | orchestrator | 2025-09-20 09:47:30.580289 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-20 09:47:30.580295 | orchestrator | Saturday 20 September 2025 09:39:48 +0000 (0:00:00.406) 0:03:24.622 **** 2025-09-20 09:47:30.580301 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580306 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.580312 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.580317 | orchestrator | 2025-09-20 09:47:30.580322 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-20 09:47:30.580328 | orchestrator | Saturday 20 September 2025 09:39:49 +0000 (0:00:00.557) 0:03:25.180 **** 2025-09-20 09:47:30.580333 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580338 | orchestrator | 2025-09-20 09:47:30.580344 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-20 09:47:30.580349 | orchestrator | Saturday 20 September 2025 09:39:49 +0000 (0:00:00.232) 0:03:25.412 **** 2025-09-20 09:47:30.580354 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580360 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.580365 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.580370 | orchestrator | 2025-09-20 09:47:30.580375 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-20 09:47:30.580381 | orchestrator | Saturday 20 September 2025 09:39:49 +0000 (0:00:00.409) 0:03:25.821 **** 2025-09-20 09:47:30.580386 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580391 | orchestrator | 2025-09-20 09:47:30.580397 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-20 
09:47:30.580402 | orchestrator | Saturday 20 September 2025 09:39:50 +0000 (0:00:00.242) 0:03:26.064 **** 2025-09-20 09:47:30.580407 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580412 | orchestrator | 2025-09-20 09:47:30.580418 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-20 09:47:30.580423 | orchestrator | Saturday 20 September 2025 09:39:50 +0000 (0:00:00.279) 0:03:26.343 **** 2025-09-20 09:47:30.580428 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580434 | orchestrator | 2025-09-20 09:47:30.580439 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-20 09:47:30.580444 | orchestrator | Saturday 20 September 2025 09:39:50 +0000 (0:00:00.157) 0:03:26.501 **** 2025-09-20 09:47:30.580449 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580455 | orchestrator | 2025-09-20 09:47:30.580460 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-20 09:47:30.580465 | orchestrator | Saturday 20 September 2025 09:39:50 +0000 (0:00:00.233) 0:03:26.735 **** 2025-09-20 09:47:30.580471 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580476 | orchestrator | 2025-09-20 09:47:30.580481 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-20 09:47:30.580486 | orchestrator | Saturday 20 September 2025 09:39:50 +0000 (0:00:00.227) 0:03:26.963 **** 2025-09-20 09:47:30.580495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:47:30.580500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:47:30.580509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:47:30.580515 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580520 | orchestrator | 2025-09-20 09:47:30.580525 | orchestrator | RUNNING HANDLER 
[ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-20 09:47:30.580531 | orchestrator | Saturday 20 September 2025 09:39:51 +0000 (0:00:00.887) 0:03:27.850 **** 2025-09-20 09:47:30.580536 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580556 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.580562 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.580567 | orchestrator | 2025-09-20 09:47:30.580573 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-20 09:47:30.580578 | orchestrator | Saturday 20 September 2025 09:39:52 +0000 (0:00:00.744) 0:03:28.594 **** 2025-09-20 09:47:30.580583 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580589 | orchestrator | 2025-09-20 09:47:30.580594 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-20 09:47:30.580599 | orchestrator | Saturday 20 September 2025 09:39:52 +0000 (0:00:00.306) 0:03:28.900 **** 2025-09-20 09:47:30.580604 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580610 | orchestrator | 2025-09-20 09:47:30.580615 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-20 09:47:30.580620 | orchestrator | Saturday 20 September 2025 09:39:53 +0000 (0:00:00.246) 0:03:29.147 **** 2025-09-20 09:47:30.580626 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.580631 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.580636 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.580641 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.580647 | orchestrator | 2025-09-20 09:47:30.580652 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-20 09:47:30.580658 | orchestrator | Saturday 20 September 
2025 09:39:54 +0000 (0:00:00.991) 0:03:30.138 **** 2025-09-20 09:47:30.580663 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.580668 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.580674 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.580679 | orchestrator | 2025-09-20 09:47:30.580684 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-20 09:47:30.580690 | orchestrator | Saturday 20 September 2025 09:39:54 +0000 (0:00:00.604) 0:03:30.743 **** 2025-09-20 09:47:30.580695 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.580700 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.580706 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.580711 | orchestrator | 2025-09-20 09:47:30.580716 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-20 09:47:30.580722 | orchestrator | Saturday 20 September 2025 09:39:56 +0000 (0:00:01.374) 0:03:32.117 **** 2025-09-20 09:47:30.580727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:47:30.580732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:47:30.580738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:47:30.580743 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580748 | orchestrator | 2025-09-20 09:47:30.580754 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-20 09:47:30.580759 | orchestrator | Saturday 20 September 2025 09:39:56 +0000 (0:00:00.878) 0:03:32.996 **** 2025-09-20 09:47:30.580764 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.580770 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.580775 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.580780 | orchestrator | 2025-09-20 09:47:30.580786 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws 
handler] ********************************** 2025-09-20 09:47:30.580791 | orchestrator | Saturday 20 September 2025 09:39:57 +0000 (0:00:00.606) 0:03:33.603 **** 2025-09-20 09:47:30.580796 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.580804 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.580810 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.580815 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.580820 | orchestrator | 2025-09-20 09:47:30.580826 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-20 09:47:30.580831 | orchestrator | Saturday 20 September 2025 09:39:58 +0000 (0:00:01.271) 0:03:34.874 **** 2025-09-20 09:47:30.580836 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.580842 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.580847 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.580852 | orchestrator | 2025-09-20 09:47:30.580858 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-20 09:47:30.580863 | orchestrator | Saturday 20 September 2025 09:39:59 +0000 (0:00:00.380) 0:03:35.255 **** 2025-09-20 09:47:30.580868 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.580874 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.580879 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.580884 | orchestrator | 2025-09-20 09:47:30.580890 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-20 09:47:30.580895 | orchestrator | Saturday 20 September 2025 09:40:01 +0000 (0:00:01.871) 0:03:37.127 **** 2025-09-20 09:47:30.580900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:47:30.580906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 
09:47:30.580911 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:47:30.580916 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580921 | orchestrator | 2025-09-20 09:47:30.580927 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-20 09:47:30.580932 | orchestrator | Saturday 20 September 2025 09:40:01 +0000 (0:00:00.592) 0:03:37.719 **** 2025-09-20 09:47:30.580937 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.580943 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.580948 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.580953 | orchestrator | 2025-09-20 09:47:30.580961 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-20 09:47:30.580967 | orchestrator | Saturday 20 September 2025 09:40:02 +0000 (0:00:00.350) 0:03:38.069 **** 2025-09-20 09:47:30.580972 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.580977 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.580983 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.580988 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.580993 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.580999 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581004 | orchestrator | 2025-09-20 09:47:30.581009 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-20 09:47:30.581028 | orchestrator | Saturday 20 September 2025 09:40:03 +0000 (0:00:01.257) 0:03:39.327 **** 2025-09-20 09:47:30.581034 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.581040 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.581057 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.581063 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-09-20 09:47:30.581068 | orchestrator | 2025-09-20 09:47:30.581074 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-20 09:47:30.581079 | orchestrator | Saturday 20 September 2025 09:40:04 +0000 (0:00:00.979) 0:03:40.306 **** 2025-09-20 09:47:30.581084 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581089 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581095 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581100 | orchestrator | 2025-09-20 09:47:30.581106 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-20 09:47:30.581111 | orchestrator | Saturday 20 September 2025 09:40:04 +0000 (0:00:00.363) 0:03:40.669 **** 2025-09-20 09:47:30.581121 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.581127 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.581132 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.581137 | orchestrator | 2025-09-20 09:47:30.581143 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-20 09:47:30.581148 | orchestrator | Saturday 20 September 2025 09:40:06 +0000 (0:00:01.846) 0:03:42.516 **** 2025-09-20 09:47:30.581154 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-20 09:47:30.581159 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-20 09:47:30.581164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-20 09:47:30.581169 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581175 | orchestrator | 2025-09-20 09:47:30.581180 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-20 09:47:30.581186 | orchestrator | Saturday 20 September 2025 09:40:07 +0000 (0:00:00.662) 0:03:43.178 **** 2025-09-20 09:47:30.581191 | orchestrator | ok: [testbed-node-0] 
2025-09-20 09:47:30.581196 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581202 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581207 | orchestrator | 2025-09-20 09:47:30.581212 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-20 09:47:30.581218 | orchestrator | 2025-09-20 09:47:30.581223 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 09:47:30.581228 | orchestrator | Saturday 20 September 2025 09:40:07 +0000 (0:00:00.799) 0:03:43.978 **** 2025-09-20 09:47:30.581234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.581239 | orchestrator | 2025-09-20 09:47:30.581245 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 09:47:30.581250 | orchestrator | Saturday 20 September 2025 09:40:08 +0000 (0:00:00.669) 0:03:44.647 **** 2025-09-20 09:47:30.581255 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.581261 | orchestrator | 2025-09-20 09:47:30.581266 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 09:47:30.581271 | orchestrator | Saturday 20 September 2025 09:40:09 +0000 (0:00:00.643) 0:03:45.290 **** 2025-09-20 09:47:30.581277 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581282 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581287 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581293 | orchestrator | 2025-09-20 09:47:30.581298 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 09:47:30.581304 | orchestrator | Saturday 20 September 2025 09:40:10 +0000 (0:00:00.772) 0:03:46.062 **** 2025-09-20 09:47:30.581309 | 
orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581314 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581319 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581325 | orchestrator | 2025-09-20 09:47:30.581330 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 09:47:30.581336 | orchestrator | Saturday 20 September 2025 09:40:10 +0000 (0:00:00.443) 0:03:46.506 **** 2025-09-20 09:47:30.581341 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581346 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581351 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581357 | orchestrator | 2025-09-20 09:47:30.581362 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 09:47:30.581367 | orchestrator | Saturday 20 September 2025 09:40:10 +0000 (0:00:00.291) 0:03:46.797 **** 2025-09-20 09:47:30.581373 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581378 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581383 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581388 | orchestrator | 2025-09-20 09:47:30.581399 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 09:47:30.581404 | orchestrator | Saturday 20 September 2025 09:40:11 +0000 (0:00:00.307) 0:03:47.105 **** 2025-09-20 09:47:30.581410 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581415 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581420 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581426 | orchestrator | 2025-09-20 09:47:30.581431 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 09:47:30.581439 | orchestrator | Saturday 20 September 2025 09:40:11 +0000 (0:00:00.714) 0:03:47.820 **** 2025-09-20 09:47:30.581445 | orchestrator | 
skipping: [testbed-node-0] 2025-09-20 09:47:30.581450 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581455 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581461 | orchestrator | 2025-09-20 09:47:30.581466 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 09:47:30.581471 | orchestrator | Saturday 20 September 2025 09:40:12 +0000 (0:00:00.367) 0:03:48.188 **** 2025-09-20 09:47:30.581477 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581482 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581488 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581493 | orchestrator | 2025-09-20 09:47:30.581512 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 09:47:30.581519 | orchestrator | Saturday 20 September 2025 09:40:12 +0000 (0:00:00.529) 0:03:48.718 **** 2025-09-20 09:47:30.581524 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581529 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581535 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581540 | orchestrator | 2025-09-20 09:47:30.581546 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 09:47:30.581551 | orchestrator | Saturday 20 September 2025 09:40:13 +0000 (0:00:00.837) 0:03:49.556 **** 2025-09-20 09:47:30.581556 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581562 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581567 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581572 | orchestrator | 2025-09-20 09:47:30.581578 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 09:47:30.581583 | orchestrator | Saturday 20 September 2025 09:40:14 +0000 (0:00:00.757) 0:03:50.313 **** 2025-09-20 09:47:30.581588 | orchestrator | skipping: [testbed-node-0] 2025-09-20 
09:47:30.581594 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581599 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581604 | orchestrator | 2025-09-20 09:47:30.581610 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 09:47:30.581615 | orchestrator | Saturday 20 September 2025 09:40:14 +0000 (0:00:00.345) 0:03:50.658 **** 2025-09-20 09:47:30.581620 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581626 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581631 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581636 | orchestrator | 2025-09-20 09:47:30.581642 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 09:47:30.581647 | orchestrator | Saturday 20 September 2025 09:40:15 +0000 (0:00:00.559) 0:03:51.218 **** 2025-09-20 09:47:30.581652 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581657 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581663 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581668 | orchestrator | 2025-09-20 09:47:30.581674 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 09:47:30.581679 | orchestrator | Saturday 20 September 2025 09:40:15 +0000 (0:00:00.416) 0:03:51.634 **** 2025-09-20 09:47:30.581684 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581690 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581695 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581700 | orchestrator | 2025-09-20 09:47:30.581706 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 09:47:30.581715 | orchestrator | Saturday 20 September 2025 09:40:15 +0000 (0:00:00.357) 0:03:51.991 **** 2025-09-20 09:47:30.581721 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581726 | 
orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581731 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581737 | orchestrator | 2025-09-20 09:47:30.581742 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 09:47:30.581747 | orchestrator | Saturday 20 September 2025 09:40:16 +0000 (0:00:00.379) 0:03:52.371 **** 2025-09-20 09:47:30.581753 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581758 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581763 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581769 | orchestrator | 2025-09-20 09:47:30.581774 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 09:47:30.581780 | orchestrator | Saturday 20 September 2025 09:40:16 +0000 (0:00:00.420) 0:03:52.792 **** 2025-09-20 09:47:30.581785 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581790 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.581796 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.581801 | orchestrator | 2025-09-20 09:47:30.581806 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-20 09:47:30.581812 | orchestrator | Saturday 20 September 2025 09:40:17 +0000 (0:00:00.285) 0:03:53.077 **** 2025-09-20 09:47:30.581817 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581823 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581828 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581833 | orchestrator | 2025-09-20 09:47:30.581839 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 09:47:30.581844 | orchestrator | Saturday 20 September 2025 09:40:17 +0000 (0:00:00.337) 0:03:53.414 **** 2025-09-20 09:47:30.581850 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581855 | orchestrator | ok: 
[testbed-node-1] 2025-09-20 09:47:30.581860 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581865 | orchestrator | 2025-09-20 09:47:30.581871 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 09:47:30.581876 | orchestrator | Saturday 20 September 2025 09:40:17 +0000 (0:00:00.458) 0:03:53.873 **** 2025-09-20 09:47:30.581882 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581887 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581892 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581897 | orchestrator | 2025-09-20 09:47:30.581903 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-20 09:47:30.581908 | orchestrator | Saturday 20 September 2025 09:40:18 +0000 (0:00:00.641) 0:03:54.514 **** 2025-09-20 09:47:30.581913 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.581919 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.581924 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.581929 | orchestrator | 2025-09-20 09:47:30.581935 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-20 09:47:30.581940 | orchestrator | Saturday 20 September 2025 09:40:18 +0000 (0:00:00.375) 0:03:54.890 **** 2025-09-20 09:47:30.581948 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.581954 | orchestrator | 2025-09-20 09:47:30.581959 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-20 09:47:30.581965 | orchestrator | Saturday 20 September 2025 09:40:19 +0000 (0:00:00.549) 0:03:55.439 **** 2025-09-20 09:47:30.581970 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.581975 | orchestrator | 2025-09-20 09:47:30.581981 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] 
***************************** 2025-09-20 09:47:30.581999 | orchestrator | Saturday 20 September 2025 09:40:19 +0000 (0:00:00.385) 0:03:55.825 **** 2025-09-20 09:47:30.582006 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-20 09:47:30.582011 | orchestrator | 2025-09-20 09:47:30.582034 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-20 09:47:30.582055 | orchestrator | Saturday 20 September 2025 09:40:20 +0000 (0:00:01.113) 0:03:56.939 **** 2025-09-20 09:47:30.582061 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.582066 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.582072 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.582077 | orchestrator | 2025-09-20 09:47:30.582083 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-20 09:47:30.582088 | orchestrator | Saturday 20 September 2025 09:40:21 +0000 (0:00:00.406) 0:03:57.345 **** 2025-09-20 09:47:30.582093 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.582099 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.582104 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.582109 | orchestrator | 2025-09-20 09:47:30.582115 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-20 09:47:30.582120 | orchestrator | Saturday 20 September 2025 09:40:21 +0000 (0:00:00.383) 0:03:57.728 **** 2025-09-20 09:47:30.582125 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582131 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582136 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582141 | orchestrator | 2025-09-20 09:47:30.582147 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-20 09:47:30.582152 | orchestrator | Saturday 20 September 2025 09:40:23 +0000 (0:00:01.311) 0:03:59.040 **** 2025-09-20 
09:47:30.582157 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582163 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582168 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582173 | orchestrator | 2025-09-20 09:47:30.582179 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-20 09:47:30.582184 | orchestrator | Saturday 20 September 2025 09:40:24 +0000 (0:00:01.139) 0:04:00.180 **** 2025-09-20 09:47:30.582189 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582195 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582200 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582205 | orchestrator | 2025-09-20 09:47:30.582211 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-20 09:47:30.582216 | orchestrator | Saturday 20 September 2025 09:40:24 +0000 (0:00:00.673) 0:04:00.853 **** 2025-09-20 09:47:30.582221 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.582227 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.582232 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.582237 | orchestrator | 2025-09-20 09:47:30.582243 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-20 09:47:30.582248 | orchestrator | Saturday 20 September 2025 09:40:25 +0000 (0:00:00.733) 0:04:01.587 **** 2025-09-20 09:47:30.582253 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582259 | orchestrator | 2025-09-20 09:47:30.582264 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-20 09:47:30.582269 | orchestrator | Saturday 20 September 2025 09:40:26 +0000 (0:00:01.285) 0:04:02.872 **** 2025-09-20 09:47:30.582275 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.582280 | orchestrator | 2025-09-20 09:47:30.582285 | orchestrator | TASK [ceph-mon : Copy admin 
keyring over to mons] ****************************** 2025-09-20 09:47:30.582291 | orchestrator | Saturday 20 September 2025 09:40:27 +0000 (0:00:00.655) 0:04:03.528 **** 2025-09-20 09:47:30.582296 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-20 09:47:30.582301 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 09:47:30.582307 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 09:47:30.582312 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 09:47:30.582317 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-20 09:47:30.582323 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-20 09:47:30.582328 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 09:47:30.582337 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-20 09:47:30.582342 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-20 09:47:30.582348 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-20 09:47:30.582353 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-20 09:47:30.582358 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-20 09:47:30.582364 | orchestrator | 2025-09-20 09:47:30.582369 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-20 09:47:30.582375 | orchestrator | Saturday 20 September 2025 09:40:30 +0000 (0:00:03.377) 0:04:06.905 **** 2025-09-20 09:47:30.582380 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582385 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582391 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582396 | orchestrator | 2025-09-20 09:47:30.582401 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2025-09-20 09:47:30.582407 | orchestrator | Saturday 20 September 2025 09:40:32 +0000 (0:00:01.639) 0:04:08.544 **** 2025-09-20 09:47:30.582412 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.582417 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.582423 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.582428 | orchestrator | 2025-09-20 09:47:30.582433 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-20 09:47:30.582442 | orchestrator | Saturday 20 September 2025 09:40:32 +0000 (0:00:00.371) 0:04:08.916 **** 2025-09-20 09:47:30.582448 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.582453 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.582458 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.582464 | orchestrator | 2025-09-20 09:47:30.582469 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-20 09:47:30.582475 | orchestrator | Saturday 20 September 2025 09:40:33 +0000 (0:00:00.368) 0:04:09.284 **** 2025-09-20 09:47:30.582480 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582485 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582491 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582496 | orchestrator | 2025-09-20 09:47:30.582517 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-20 09:47:30.582524 | orchestrator | Saturday 20 September 2025 09:40:35 +0000 (0:00:01.948) 0:04:11.233 **** 2025-09-20 09:47:30.582529 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582535 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582540 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582545 | orchestrator | 2025-09-20 09:47:30.582550 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-20 
09:47:30.582556 | orchestrator | Saturday 20 September 2025 09:40:37 +0000 (0:00:01.798) 0:04:13.032 **** 2025-09-20 09:47:30.582561 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.582566 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.582572 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.582577 | orchestrator | 2025-09-20 09:47:30.582583 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-20 09:47:30.582588 | orchestrator | Saturday 20 September 2025 09:40:37 +0000 (0:00:00.473) 0:04:13.505 **** 2025-09-20 09:47:30.582593 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.582599 | orchestrator | 2025-09-20 09:47:30.582604 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-20 09:47:30.582609 | orchestrator | Saturday 20 September 2025 09:40:38 +0000 (0:00:00.542) 0:04:14.048 **** 2025-09-20 09:47:30.582615 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.582620 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.582625 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.582631 | orchestrator | 2025-09-20 09:47:30.582636 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-20 09:47:30.582647 | orchestrator | Saturday 20 September 2025 09:40:38 +0000 (0:00:00.590) 0:04:14.639 **** 2025-09-20 09:47:30.582653 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.582658 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.582663 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.582669 | orchestrator | 2025-09-20 09:47:30.582674 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-20 09:47:30.582679 | orchestrator | Saturday 20 September 2025 
09:40:38 +0000 (0:00:00.331) 0:04:14.970 **** 2025-09-20 09:47:30.582685 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.582690 | orchestrator | 2025-09-20 09:47:30.582696 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-20 09:47:30.582701 | orchestrator | Saturday 20 September 2025 09:40:39 +0000 (0:00:00.538) 0:04:15.509 **** 2025-09-20 09:47:30.582706 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582712 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582717 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582722 | orchestrator | 2025-09-20 09:47:30.582728 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-20 09:47:30.582733 | orchestrator | Saturday 20 September 2025 09:40:42 +0000 (0:00:02.492) 0:04:18.001 **** 2025-09-20 09:47:30.582739 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582744 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582749 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582755 | orchestrator | 2025-09-20 09:47:30.582760 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-20 09:47:30.582765 | orchestrator | Saturday 20 September 2025 09:40:43 +0000 (0:00:01.200) 0:04:19.202 **** 2025-09-20 09:47:30.582771 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582776 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582781 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582787 | orchestrator | 2025-09-20 09:47:30.582792 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-20 09:47:30.582797 | orchestrator | Saturday 20 September 2025 09:40:44 +0000 (0:00:01.683) 0:04:20.886 **** 2025-09-20 09:47:30.582803 | 
orchestrator | changed: [testbed-node-1] 2025-09-20 09:47:30.582808 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:47:30.582814 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:47:30.582819 | orchestrator | 2025-09-20 09:47:30.582824 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-20 09:47:30.582830 | orchestrator | Saturday 20 September 2025 09:40:46 +0000 (0:00:01.937) 0:04:22.823 **** 2025-09-20 09:47:30.582835 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.582840 | orchestrator | 2025-09-20 09:47:30.582846 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-20 09:47:30.582851 | orchestrator | Saturday 20 September 2025 09:40:47 +0000 (0:00:00.595) 0:04:23.419 **** 2025-09-20 09:47:30.582856 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2025-09-20 09:47:30.582862 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.582867 | orchestrator | 2025-09-20 09:47:30.582872 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-20 09:47:30.582878 | orchestrator | Saturday 20 September 2025 09:41:09 +0000 (0:00:21.954) 0:04:45.373 **** 2025-09-20 09:47:30.582883 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.582889 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.582894 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.582899 | orchestrator | 2025-09-20 09:47:30.582905 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-20 09:47:30.582913 | orchestrator | Saturday 20 September 2025 09:41:18 +0000 (0:00:09.529) 0:04:54.903 **** 2025-09-20 09:47:30.582918 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.582927 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.582933 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.582938 | orchestrator | 2025-09-20 09:47:30.582944 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-20 09:47:30.582949 | orchestrator | Saturday 20 September 2025 09:41:19 +0000 (0:00:00.299) 0:04:55.202 **** 2025-09-20 09:47:30.582968 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__283bb50aa2b2704f59accf6963babe59e612f3b3'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-20 09:47:30.582976 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__283bb50aa2b2704f59accf6963babe59e612f3b3'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-20 09:47:30.582982 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__283bb50aa2b2704f59accf6963babe59e612f3b3'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-20 09:47:30.582988 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__283bb50aa2b2704f59accf6963babe59e612f3b3'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-20 09:47:30.582994 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__283bb50aa2b2704f59accf6963babe59e612f3b3'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-20 09:47:30.583000 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__283bb50aa2b2704f59accf6963babe59e612f3b3'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__283bb50aa2b2704f59accf6963babe59e612f3b3'}])  2025-09-20 09:47:30.583006 | orchestrator | 2025-09-20 09:47:30.583012 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 09:47:30.583017 | orchestrator | Saturday 20 September 2025 09:41:34 +0000 (0:00:15.074) 0:05:10.276 **** 2025-09-20 09:47:30.583023 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583028 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583033 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583039 | orchestrator | 2025-09-20 09:47:30.583054 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-20 09:47:30.583060 | orchestrator | Saturday 20 September 2025 09:41:34 +0000 (0:00:00.360) 0:05:10.637 **** 2025-09-20 09:47:30.583065 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.583071 | orchestrator | 2025-09-20 09:47:30.583076 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-20 09:47:30.583081 | orchestrator | Saturday 20 September 2025 09:41:35 +0000 (0:00:00.749) 0:05:11.387 **** 2025-09-20 09:47:30.583087 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.583092 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.583102 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.583107 | orchestrator | 2025-09-20 09:47:30.583112 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-20 09:47:30.583118 | orchestrator | Saturday 20 September 2025 09:41:35 +0000 (0:00:00.383) 0:05:11.771 **** 2025-09-20 09:47:30.583123 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583128 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583134 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583139 | orchestrator | 2025-09-20 09:47:30.583144 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-20 
09:47:30.583150 | orchestrator | Saturday 20 September 2025 09:41:36 +0000 (0:00:00.392) 0:05:12.164 **** 2025-09-20 09:47:30.583155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-20 09:47:30.583160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-20 09:47:30.583169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-20 09:47:30.583174 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583179 | orchestrator | 2025-09-20 09:47:30.583185 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-20 09:47:30.583190 | orchestrator | Saturday 20 September 2025 09:41:36 +0000 (0:00:00.715) 0:05:12.879 **** 2025-09-20 09:47:30.583195 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.583201 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.583206 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.583211 | orchestrator | 2025-09-20 09:47:30.583231 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-20 09:47:30.583237 | orchestrator | 2025-09-20 09:47:30.583242 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 09:47:30.583248 | orchestrator | Saturday 20 September 2025 09:41:38 +0000 (0:00:01.117) 0:05:13.997 **** 2025-09-20 09:47:30.583253 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.583259 | orchestrator | 2025-09-20 09:47:30.583264 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 09:47:30.583270 | orchestrator | Saturday 20 September 2025 09:41:38 +0000 (0:00:00.524) 0:05:14.521 **** 2025-09-20 09:47:30.583275 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-09-20 09:47:30.583280 | orchestrator | 2025-09-20 09:47:30.583286 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 09:47:30.583291 | orchestrator | Saturday 20 September 2025 09:41:39 +0000 (0:00:00.557) 0:05:15.079 **** 2025-09-20 09:47:30.583296 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.583302 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.583307 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.583312 | orchestrator | 2025-09-20 09:47:30.583318 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 09:47:30.583323 | orchestrator | Saturday 20 September 2025 09:41:40 +0000 (0:00:01.051) 0:05:16.130 **** 2025-09-20 09:47:30.583328 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583334 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583339 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583344 | orchestrator | 2025-09-20 09:47:30.583350 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 09:47:30.583355 | orchestrator | Saturday 20 September 2025 09:41:40 +0000 (0:00:00.366) 0:05:16.497 **** 2025-09-20 09:47:30.583360 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583366 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583371 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583376 | orchestrator | 2025-09-20 09:47:30.583382 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 09:47:30.583387 | orchestrator | Saturday 20 September 2025 09:41:40 +0000 (0:00:00.299) 0:05:16.797 **** 2025-09-20 09:47:30.583397 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583402 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583407 | orchestrator | skipping: 
[testbed-node-2] 2025-09-20 09:47:30.583413 | orchestrator | 2025-09-20 09:47:30.583418 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-20 09:47:30.583423 | orchestrator | Saturday 20 September 2025 09:41:41 +0000 (0:00:00.397) 0:05:17.194 **** 2025-09-20 09:47:30.583429 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.583434 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.583439 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.583445 | orchestrator | 2025-09-20 09:47:30.583450 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 09:47:30.583455 | orchestrator | Saturday 20 September 2025 09:41:42 +0000 (0:00:01.124) 0:05:18.319 **** 2025-09-20 09:47:30.583461 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583466 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583471 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583477 | orchestrator | 2025-09-20 09:47:30.583482 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 09:47:30.583487 | orchestrator | Saturday 20 September 2025 09:41:42 +0000 (0:00:00.386) 0:05:18.706 **** 2025-09-20 09:47:30.583493 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583498 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583503 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583509 | orchestrator | 2025-09-20 09:47:30.583514 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 09:47:30.583519 | orchestrator | Saturday 20 September 2025 09:41:43 +0000 (0:00:00.323) 0:05:19.030 **** 2025-09-20 09:47:30.583525 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.583530 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.583535 | orchestrator | ok: [testbed-node-2] 2025-09-20 
09:47:30.583541 | orchestrator | 2025-09-20 09:47:30.583546 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 09:47:30.583551 | orchestrator | Saturday 20 September 2025 09:41:43 +0000 (0:00:00.759) 0:05:19.789 **** 2025-09-20 09:47:30.583557 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.583562 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.583567 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.583573 | orchestrator | 2025-09-20 09:47:30.583578 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 09:47:30.583583 | orchestrator | Saturday 20 September 2025 09:41:44 +0000 (0:00:01.055) 0:05:20.845 **** 2025-09-20 09:47:30.583589 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583594 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583599 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583605 | orchestrator | 2025-09-20 09:47:30.583610 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 09:47:30.583615 | orchestrator | Saturday 20 September 2025 09:41:45 +0000 (0:00:00.358) 0:05:21.203 **** 2025-09-20 09:47:30.583621 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.583626 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.583631 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.583637 | orchestrator | 2025-09-20 09:47:30.583642 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 09:47:30.583650 | orchestrator | Saturday 20 September 2025 09:41:45 +0000 (0:00:00.383) 0:05:21.586 **** 2025-09-20 09:47:30.583655 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583661 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583666 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583671 | orchestrator | 
2025-09-20 09:47:30.583677 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 09:47:30.583682 | orchestrator | Saturday 20 September 2025 09:41:45 +0000 (0:00:00.319) 0:05:21.905 **** 2025-09-20 09:47:30.583688 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583697 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583715 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583721 | orchestrator | 2025-09-20 09:47:30.583727 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 09:47:30.583732 | orchestrator | Saturday 20 September 2025 09:41:46 +0000 (0:00:00.465) 0:05:22.370 **** 2025-09-20 09:47:30.583737 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583743 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583748 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583753 | orchestrator | 2025-09-20 09:47:30.583759 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 09:47:30.583764 | orchestrator | Saturday 20 September 2025 09:41:46 +0000 (0:00:00.271) 0:05:22.642 **** 2025-09-20 09:47:30.583769 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583775 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583780 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583785 | orchestrator | 2025-09-20 09:47:30.583791 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 09:47:30.583796 | orchestrator | Saturday 20 September 2025 09:41:46 +0000 (0:00:00.279) 0:05:22.922 **** 2025-09-20 09:47:30.583801 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.583807 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.583812 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.583817 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 20 September 2025 09:41:47 +0000 (0:00:00.265) 0:05:23.187 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 20 September 2025 09:41:47 +0000 (0:00:00.284) 0:05:23.472 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 20 September 2025 09:41:47 +0000 (0:00:00.472) 0:05:23.945 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Saturday 20 September 2025 09:41:48 +0000 (0:00:00.520) 0:05:24.466 ****
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Saturday 20 September 2025 09:41:49 +0000 (0:00:00.681) 0:05:25.148 ****
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Saturday 20 September 2025 09:41:49 +0000 (0:00:00.654) 0:05:25.802 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Saturday 20 September 2025 09:41:50 +0000 (0:00:00.648) 0:05:26.451 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Saturday 20 September 2025 09:41:50 +0000 (0:00:00.346) 0:05:26.798 ****
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Saturday 20 September 2025 09:42:01 +0000 (0:00:10.620) 0:05:37.418 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Saturday 20 September 2025 09:42:01 +0000 (0:00:00.458) 0:05:37.876 ****
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Saturday 20 September 2025 09:42:04 +0000 (0:00:02.124) 0:05:40.001 ****
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : Set mgr key permissions] **************************************
Saturday 20 September 2025 09:42:05 +0000 (0:00:01.219) 0:05:41.220 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
Saturday 20 September 2025 09:42:05 +0000 (0:00:00.674) 0:05:41.895 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include pre_requisite.yml] ************************************
Saturday 20 September 2025 09:42:06 +0000 (0:00:00.590) 0:05:42.486 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include start_mgr.yml] ****************************************
Saturday 20 September 2025 09:42:06 +0000 (0:00:00.346) 0:05:42.832 ****
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Ensure systemd service override directory exists] *************
Saturday 20 September 2025 09:42:07 +0000 (0:00:00.537) 0:05:43.369 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
Saturday 20 September 2025 09:42:07 +0000 (0:00:00.320) 0:05:43.690 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
Saturday 20 September 2025 09:42:08 +0000 (0:00:00.628) 0:05:44.319 ****
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Generate systemd unit file] ***********************************
Saturday 20 September 2025 09:42:08 +0000 (0:00:00.524) 0:05:44.843 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
Saturday 20 September 2025 09:42:10 +0000 (0:00:01.245) 0:05:46.088 ****
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
Saturday 20 September 2025 09:42:11 +0000 (0:00:01.460) 0:05:47.549 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Systemd start mgr] ********************************************
Saturday 20 September 2025 09:42:13 +0000 (0:00:01.766) 0:05:49.315 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Include mgr_modules.yml] **************************************
Saturday 20 September 2025 09:42:15 +0000 (0:00:01.796) 0:05:51.111 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Saturday 20 September 2025 09:42:15 +0000 (0:00:00.418) 0:05:51.530 ****
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Saturday 20 September 2025 09:42:40 +0000 (0:00:24.821) 0:06:16.351 ****
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Saturday 20 September 2025 09:42:41 +0000 (0:00:01.274) 0:06:17.626 ****
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Saturday 20 September 2025 09:42:41 +0000 (0:00:00.294) 0:06:17.920 ****
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Saturday 20 September 2025 09:42:42 +0000 (0:00:00.129) 0:06:18.050 ****
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Saturday 20 September 2025 09:42:48 +0000 (0:00:06.364) 0:06:24.415 ****
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 20 September 2025 09:42:53 +0000 (0:00:04.626) 0:06:29.041 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Saturday 20 September 2025 09:42:54 +0000 (0:00:01.023) 0:06:30.065 ****
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Saturday 20 September 2025 09:42:54 +0000 (0:00:00.305) 0:06:30.634 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Saturday 20 September 2025 09:42:54 +0000 (0:00:00.305) 0:06:30.940 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Saturday 20 September 2025 09:42:56 +0000 (0:00:01.461) 0:06:32.401 ****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Saturday 20 September 2025 09:42:57 +0000 (0:00:00.636) 0:06:33.037 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 20 September 2025 09:42:57 +0000 (0:00:00.541) 0:06:33.579 ****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 20 September 2025 09:42:58 +0000 (0:00:00.730) 0:06:34.310 ****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 20 September 2025 09:42:58 +0000 (0:00:00.523) 0:06:34.833 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 20 September 2025 09:42:59 +0000 (0:00:00.323) 0:06:35.156 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 20 September 2025 09:43:00 +0000 (0:00:00.944) 0:06:36.101 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 20 September 2025 09:43:00 +0000 (0:00:00.725) 0:06:36.827 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 20 September 2025 09:43:01 +0000 (0:00:00.747) 0:06:37.574 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 20 September 2025 09:43:01 +0000 (0:00:00.294) 0:06:37.869 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 20 September 2025 09:43:02 +0000 (0:00:00.572) 0:06:38.441 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 20 September 2025 09:43:02 +0000 (0:00:00.339) 0:06:38.781 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 20 September 2025 09:43:03 +0000 (0:00:00.679) 0:06:39.460 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 20 September 2025 09:43:04 +0000 (0:00:00.751) 0:06:40.212 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 20 September 2025 09:43:04 +0000 (0:00:00.673) 0:06:40.886 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 20 September 2025 09:43:05 +0000 (0:00:00.385) 0:06:41.271 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 20 September 2025 09:43:05 +0000 (0:00:00.392) 0:06:41.663 ****
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 20 September 2025 09:43:05 +0000 (0:00:00.325) 0:06:41.988 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 20 September 2025 09:43:06 +0000 (0:00:00.676) 0:06:42.665 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 20 September 2025 09:43:06 +0000 (0:00:00.327) 0:06:42.992 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 20 September 2025 09:43:07 +0000 (0:00:00.327) 0:06:43.320 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 20 September 2025 09:43:07 +0000 (0:00:00.396) 0:06:43.717 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 20 September 2025 09:43:08 +0000 (0:00:00.630) 0:06:44.348 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Saturday 20 September 2025 09:43:08 +0000 (0:00:00.606) 0:06:44.954 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Saturday 20 September 2025 09:43:09 +0000 (0:00:00.317) 0:06:45.271 ****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Saturday 20 September 2025 09:43:10 +0000 (0:00:00.942) 0:06:46.214 ****
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Saturday 20 September 2025 09:43:10 +0000 (0:00:00.774) 0:06:46.988 ****
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Saturday 20 September 2025 09:43:11 +0000 (0:00:00.318) 0:06:47.307 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Saturday 20 September 2025 09:43:11 +0000 (0:00:00.300) 0:06:47.607 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Saturday 20 September 2025 09:43:12 +0000 (0:00:00.910) 0:06:48.518 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Saturday 20 September 2025 09:43:12 +0000 (0:00:00.424) 0:06:48.942 ****
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
Saturday 20 September 2025 09:43:15 +0000 (0:00:02.959) 0:06:51.902 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Saturday 20 September 2025 09:43:16 +0000 (0:00:00.303) 0:06:52.205 ****
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Saturday 20 September 2025 09:43:17 +0000 (0:00:00.804) 0:06:53.010 ****
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Saturday 20 September 2025 09:43:17 +0000 (0:00:00.935) 0:06:53.946 ****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Saturday 20 September 2025 09:43:19 +0000 (0:00:01.907) 0:06:55.854 ****
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 09:47:30.586113 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-20 09:47:30.586118 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.586123 | orchestrator | 2025-09-20 09:47:30.586128 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-20 09:47:30.586133 | orchestrator | Saturday 20 September 2025 09:43:21 +0000 (0:00:01.441) 0:06:57.295 **** 2025-09-20 09:47:30.586137 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 09:47:30.586142 | orchestrator | 2025-09-20 09:47:30.586147 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-20 09:47:30.586152 | orchestrator | Saturday 20 September 2025 09:43:23 +0000 (0:00:02.032) 0:06:59.328 **** 2025-09-20 09:47:30.586156 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.586165 | orchestrator | 2025-09-20 09:47:30.586169 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-20 09:47:30.586174 | orchestrator | Saturday 20 September 2025 09:43:23 +0000 (0:00:00.523) 0:06:59.851 **** 2025-09-20 09:47:30.586179 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a0e476ce-8dbb-5cb3-b205-e96c67f25126', 'data_vg': 'ceph-a0e476ce-8dbb-5cb3-b205-e96c67f25126'}) 2025-09-20 09:47:30.586187 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-0cf3001a-a2bc-51f5-b2f0-80e0839adf22', 'data_vg': 'ceph-0cf3001a-a2bc-51f5-b2f0-80e0839adf22'}) 2025-09-20 09:47:30.586192 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6319afae-7c48-5c70-87a8-62ab4a9b6a4c', 'data_vg': 'ceph-6319afae-7c48-5c70-87a8-62ab4a9b6a4c'}) 2025-09-20 09:47:30.586197 | orchestrator | changed: [testbed-node-5] => (item={'data': 
'osd-block-54d5d251-b5b9-5293-b72e-54d20a6e98e4', 'data_vg': 'ceph-54d5d251-b5b9-5293-b72e-54d20a6e98e4'}) 2025-09-20 09:47:30.586204 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f5012b99-8722-5cc3-9d11-b95ce6d4943a', 'data_vg': 'ceph-f5012b99-8722-5cc3-9d11-b95ce6d4943a'}) 2025-09-20 09:47:30.586209 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-606172b3-e8d7-56e6-aaf4-86ed1800c0e9', 'data_vg': 'ceph-606172b3-e8d7-56e6-aaf4-86ed1800c0e9'}) 2025-09-20 09:47:30.586214 | orchestrator | 2025-09-20 09:47:30.586218 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-20 09:47:30.586223 | orchestrator | Saturday 20 September 2025 09:44:06 +0000 (0:00:42.262) 0:07:42.113 **** 2025-09-20 09:47:30.586227 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586232 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586236 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.586241 | orchestrator | 2025-09-20 09:47:30.586245 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-20 09:47:30.586250 | orchestrator | Saturday 20 September 2025 09:44:06 +0000 (0:00:00.595) 0:07:42.709 **** 2025-09-20 09:47:30.586254 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.586259 | orchestrator | 2025-09-20 09:47:30.586264 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-20 09:47:30.586268 | orchestrator | Saturday 20 September 2025 09:44:07 +0000 (0:00:00.585) 0:07:43.294 **** 2025-09-20 09:47:30.586273 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.586277 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.586282 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.586286 | orchestrator | 2025-09-20 09:47:30.586291 | orchestrator 
| TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-20 09:47:30.586295 | orchestrator | Saturday 20 September 2025 09:44:07 +0000 (0:00:00.639) 0:07:43.933 **** 2025-09-20 09:47:30.586300 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.586304 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.586309 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.586314 | orchestrator | 2025-09-20 09:47:30.586318 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-20 09:47:30.586323 | orchestrator | Saturday 20 September 2025 09:44:10 +0000 (0:00:02.918) 0:07:46.851 **** 2025-09-20 09:47:30.586327 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.586332 | orchestrator | 2025-09-20 09:47:30.586336 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-09-20 09:47:30.586341 | orchestrator | Saturday 20 September 2025 09:44:11 +0000 (0:00:00.544) 0:07:47.396 **** 2025-09-20 09:47:30.586345 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.586350 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.586354 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.586359 | orchestrator | 2025-09-20 09:47:30.586367 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-20 09:47:30.586371 | orchestrator | Saturday 20 September 2025 09:44:12 +0000 (0:00:01.255) 0:07:48.651 **** 2025-09-20 09:47:30.586376 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.586380 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.586385 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.586389 | orchestrator | 2025-09-20 09:47:30.586394 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-20 
09:47:30.586398 | orchestrator | Saturday 20 September 2025 09:44:14 +0000 (0:00:01.403) 0:07:50.054 **** 2025-09-20 09:47:30.586403 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.586407 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.586412 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.586417 | orchestrator | 2025-09-20 09:47:30.586421 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-20 09:47:30.586426 | orchestrator | Saturday 20 September 2025 09:44:15 +0000 (0:00:01.813) 0:07:51.868 **** 2025-09-20 09:47:30.586430 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586435 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586439 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.586444 | orchestrator | 2025-09-20 09:47:30.586448 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-09-20 09:47:30.586453 | orchestrator | Saturday 20 September 2025 09:44:16 +0000 (0:00:00.368) 0:07:52.237 **** 2025-09-20 09:47:30.586457 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586462 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586466 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.586471 | orchestrator | 2025-09-20 09:47:30.586475 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-20 09:47:30.586480 | orchestrator | Saturday 20 September 2025 09:44:16 +0000 (0:00:00.343) 0:07:52.580 **** 2025-09-20 09:47:30.586484 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-20 09:47:30.586489 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-09-20 09:47:30.586494 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-09-20 09:47:30.586498 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-09-20 09:47:30.586503 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-09-20 09:47:30.586507 | 
orchestrator | ok: [testbed-node-5] => (item=3) 2025-09-20 09:47:30.586512 | orchestrator | 2025-09-20 09:47:30.586516 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-20 09:47:30.586521 | orchestrator | Saturday 20 September 2025 09:44:17 +0000 (0:00:01.318) 0:07:53.898 **** 2025-09-20 09:47:30.586527 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-20 09:47:30.586532 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-20 09:47:30.586537 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-20 09:47:30.586541 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-20 09:47:30.586546 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-20 09:47:30.586550 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-20 09:47:30.586555 | orchestrator | 2025-09-20 09:47:30.586559 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-09-20 09:47:30.586564 | orchestrator | Saturday 20 September 2025 09:44:20 +0000 (0:00:02.228) 0:07:56.127 **** 2025-09-20 09:47:30.586571 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-20 09:47:30.586576 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-09-20 09:47:30.586580 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-09-20 09:47:30.586585 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-09-20 09:47:30.586589 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-20 09:47:30.586594 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-09-20 09:47:30.586598 | orchestrator | 2025-09-20 09:47:30.586603 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-20 09:47:30.586607 | orchestrator | Saturday 20 September 2025 09:44:24 +0000 (0:00:04.288) 0:08:00.416 **** 2025-09-20 09:47:30.586615 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586620 | 
orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586624 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-20 09:47:30.586629 | orchestrator | 2025-09-20 09:47:30.586633 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-20 09:47:30.586638 | orchestrator | Saturday 20 September 2025 09:44:27 +0000 (0:00:02.832) 0:08:03.248 **** 2025-09-20 09:47:30.586642 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586647 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586651 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-09-20 09:47:30.586656 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-20 09:47:30.586660 | orchestrator | 2025-09-20 09:47:30.586665 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-20 09:47:30.586669 | orchestrator | Saturday 20 September 2025 09:44:40 +0000 (0:00:13.151) 0:08:16.400 **** 2025-09-20 09:47:30.586674 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586678 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586683 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.586688 | orchestrator | 2025-09-20 09:47:30.586692 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 09:47:30.586697 | orchestrator | Saturday 20 September 2025 09:44:41 +0000 (0:00:00.879) 0:08:17.279 **** 2025-09-20 09:47:30.586701 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586706 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586710 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.586715 | orchestrator | 2025-09-20 09:47:30.586719 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-20 
09:47:30.586724 | orchestrator | Saturday 20 September 2025 09:44:41 +0000 (0:00:00.653) 0:08:17.933 **** 2025-09-20 09:47:30.586728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.586733 | orchestrator | 2025-09-20 09:47:30.586737 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-20 09:47:30.586742 | orchestrator | Saturday 20 September 2025 09:44:42 +0000 (0:00:00.554) 0:08:18.488 **** 2025-09-20 09:47:30.586746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:47:30.586751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:47:30.586756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:47:30.586760 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586765 | orchestrator | 2025-09-20 09:47:30.586769 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-20 09:47:30.586774 | orchestrator | Saturday 20 September 2025 09:44:42 +0000 (0:00:00.443) 0:08:18.931 **** 2025-09-20 09:47:30.586778 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586783 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586787 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.586792 | orchestrator | 2025-09-20 09:47:30.586796 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-20 09:47:30.586801 | orchestrator | Saturday 20 September 2025 09:44:43 +0000 (0:00:00.310) 0:08:19.242 **** 2025-09-20 09:47:30.586805 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586810 | orchestrator | 2025-09-20 09:47:30.586814 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-20 09:47:30.586819 | orchestrator | Saturday 20 
September 2025 09:44:43 +0000 (0:00:00.236) 0:08:19.478 **** 2025-09-20 09:47:30.586824 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586828 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586833 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.586837 | orchestrator | 2025-09-20 09:47:30.586842 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-20 09:47:30.586849 | orchestrator | Saturday 20 September 2025 09:44:44 +0000 (0:00:00.586) 0:08:20.065 **** 2025-09-20 09:47:30.586853 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586858 | orchestrator | 2025-09-20 09:47:30.586863 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-20 09:47:30.586867 | orchestrator | Saturday 20 September 2025 09:44:44 +0000 (0:00:00.244) 0:08:20.310 **** 2025-09-20 09:47:30.586872 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586876 | orchestrator | 2025-09-20 09:47:30.586880 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-20 09:47:30.586885 | orchestrator | Saturday 20 September 2025 09:44:44 +0000 (0:00:00.249) 0:08:20.559 **** 2025-09-20 09:47:30.586890 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586894 | orchestrator | 2025-09-20 09:47:30.586902 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-20 09:47:30.586907 | orchestrator | Saturday 20 September 2025 09:44:44 +0000 (0:00:00.137) 0:08:20.698 **** 2025-09-20 09:47:30.586912 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586916 | orchestrator | 2025-09-20 09:47:30.586921 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-20 09:47:30.586925 | orchestrator | Saturday 20 September 2025 09:44:44 +0000 (0:00:00.230) 0:08:20.928 **** 2025-09-20 
09:47:30.586930 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586934 | orchestrator | 2025-09-20 09:47:30.586939 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-20 09:47:30.586945 | orchestrator | Saturday 20 September 2025 09:44:45 +0000 (0:00:00.219) 0:08:21.148 **** 2025-09-20 09:47:30.586950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:47:30.586955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:47:30.586959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:47:30.586964 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586969 | orchestrator | 2025-09-20 09:47:30.586973 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-20 09:47:30.586978 | orchestrator | Saturday 20 September 2025 09:44:45 +0000 (0:00:00.414) 0:08:21.563 **** 2025-09-20 09:47:30.586982 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.586987 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.586991 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.586996 | orchestrator | 2025-09-20 09:47:30.587000 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-20 09:47:30.587005 | orchestrator | Saturday 20 September 2025 09:44:45 +0000 (0:00:00.289) 0:08:21.853 **** 2025-09-20 09:47:30.587009 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.587014 | orchestrator | 2025-09-20 09:47:30.587018 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-20 09:47:30.587023 | orchestrator | Saturday 20 September 2025 09:44:46 +0000 (0:00:00.798) 0:08:22.651 **** 2025-09-20 09:47:30.587027 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.587032 | orchestrator | 2025-09-20 09:47:30.587036 | 
orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-20 09:47:30.587041 | orchestrator | 2025-09-20 09:47:30.587060 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-20 09:47:30.587065 | orchestrator | Saturday 20 September 2025 09:44:47 +0000 (0:00:00.673) 0:08:23.324 **** 2025-09-20 09:47:30.587070 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.587075 | orchestrator | 2025-09-20 09:47:30.587080 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-20 09:47:30.587084 | orchestrator | Saturday 20 September 2025 09:44:48 +0000 (0:00:01.223) 0:08:24.548 **** 2025-09-20 09:47:30.587089 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:47:30.587109 | orchestrator | 2025-09-20 09:47:30.587114 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-20 09:47:30.587118 | orchestrator | Saturday 20 September 2025 09:44:49 +0000 (0:00:01.257) 0:08:25.805 **** 2025-09-20 09:47:30.587123 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.587127 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.587132 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.587137 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.587141 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.587146 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.587150 | orchestrator | 2025-09-20 09:47:30.587155 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-20 09:47:30.587159 | 
orchestrator | Saturday 20 September 2025 09:44:51 +0000 (0:00:01.274) 0:08:27.080 **** 2025-09-20 09:47:30.587164 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.587168 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.587173 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.587178 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.587182 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.587187 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.587191 | orchestrator | 2025-09-20 09:47:30.587196 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-20 09:47:30.587200 | orchestrator | Saturday 20 September 2025 09:44:51 +0000 (0:00:00.781) 0:08:27.861 **** 2025-09-20 09:47:30.587205 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.587209 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.587214 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.587218 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.587223 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.587227 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.587232 | orchestrator | 2025-09-20 09:47:30.587237 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-20 09:47:30.587241 | orchestrator | Saturday 20 September 2025 09:44:52 +0000 (0:00:00.975) 0:08:28.836 **** 2025-09-20 09:47:30.587246 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.587250 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.587255 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.587259 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.587264 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.587268 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.587273 | orchestrator | 2025-09-20 09:47:30.587278 | orchestrator | TASK [ceph-handler : Check 
for a mgr container] ******************************** 2025-09-20 09:47:30.587282 | orchestrator | Saturday 20 September 2025 09:44:53 +0000 (0:00:00.727) 0:08:29.564 **** 2025-09-20 09:47:30.587287 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.587291 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.587296 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.587300 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.587305 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.587309 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.587314 | orchestrator | 2025-09-20 09:47:30.587321 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 09:47:30.587325 | orchestrator | Saturday 20 September 2025 09:44:54 +0000 (0:00:00.973) 0:08:30.538 **** 2025-09-20 09:47:30.587330 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.587334 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.587339 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.587343 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.587348 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.587352 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:47:30.587357 | orchestrator | 2025-09-20 09:47:30.587361 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 09:47:30.587373 | orchestrator | Saturday 20 September 2025 09:44:55 +0000 (0:00:00.886) 0:08:31.425 **** 2025-09-20 09:47:30.587378 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.587382 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.587387 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.587391 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.587396 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:47:30.587400 | orchestrator | skipping: 
[testbed-node-2] 2025-09-20 09:47:30.587405 | orchestrator | 2025-09-20 09:47:30.587409 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 09:47:30.587414 | orchestrator | Saturday 20 September 2025 09:44:56 +0000 (0:00:00.657) 0:08:32.083 **** 2025-09-20 09:47:30.587418 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.587423 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.587427 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.587432 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.587436 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.587441 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.587445 | orchestrator | 2025-09-20 09:47:30.587450 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 09:47:30.587454 | orchestrator | Saturday 20 September 2025 09:44:57 +0000 (0:00:01.350) 0:08:33.433 **** 2025-09-20 09:47:30.587459 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.587463 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.587468 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.587472 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:47:30.587477 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:47:30.587481 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:47:30.587486 | orchestrator | 2025-09-20 09:47:30.587490 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 09:47:30.587495 | orchestrator | Saturday 20 September 2025 09:44:58 +0000 (0:00:01.046) 0:08:34.480 **** 2025-09-20 09:47:30.587499 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.587504 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.587508 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.587513 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:47:30.587517 | orchestrator | skipping: 
[testbed-node-1]
2025-09-20 09:47:30.587522 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.587526 | orchestrator |
2025-09-20 09:47:30.587531 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-20 09:47:30.587535 | orchestrator | Saturday 20 September 2025 09:44:59 +0000 (0:00:00.862) 0:08:35.342 ****
2025-09-20 09:47:30.587540 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.587544 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.587549 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.587553 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.587558 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.587562 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.587567 | orchestrator |
2025-09-20 09:47:30.587571 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-20 09:47:30.587576 | orchestrator | Saturday 20 September 2025 09:44:59 +0000 (0:00:00.603) 0:08:35.946 ****
2025-09-20 09:47:30.587580 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.587585 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.587589 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.587594 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.587598 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.587603 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.587607 | orchestrator |
2025-09-20 09:47:30.587612 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-20 09:47:30.587617 | orchestrator | Saturday 20 September 2025 09:45:00 +0000 (0:00:00.861) 0:08:36.807 ****
2025-09-20 09:47:30.587621 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.587626 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.587630 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.587638 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.587643 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.587648 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.587652 | orchestrator |
2025-09-20 09:47:30.587657 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-20 09:47:30.587661 | orchestrator | Saturday 20 September 2025 09:45:01 +0000 (0:00:00.621) 0:08:37.429 ****
2025-09-20 09:47:30.587666 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.587670 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.587675 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.587679 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.587683 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.587688 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.587692 | orchestrator |
2025-09-20 09:47:30.587697 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-20 09:47:30.587701 | orchestrator | Saturday 20 September 2025 09:45:02 +0000 (0:00:00.852) 0:08:38.282 ****
2025-09-20 09:47:30.587706 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.587710 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.587715 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.587719 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.587724 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.587728 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.587733 | orchestrator |
2025-09-20 09:47:30.587737 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-20 09:47:30.587742 | orchestrator | Saturday 20 September 2025 09:45:02 +0000 (0:00:00.585) 0:08:38.867 ****
2025-09-20 09:47:30.587746 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.587751 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.587755 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.587760 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:47:30.587764 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:47:30.587771 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:47:30.587776 | orchestrator |
2025-09-20 09:47:30.587781 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-20 09:47:30.587785 | orchestrator | Saturday 20 September 2025 09:45:03 +0000 (0:00:00.945) 0:08:39.812 ****
2025-09-20 09:47:30.587790 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.587794 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.587799 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.587803 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.587808 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.587812 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.587817 | orchestrator |
2025-09-20 09:47:30.587823 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-20 09:47:30.587828 | orchestrator | Saturday 20 September 2025 09:45:04 +0000 (0:00:00.598) 0:08:40.411 ****
2025-09-20 09:47:30.587833 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.587837 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.587842 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.587846 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.587850 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.587855 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.587859 | orchestrator |
2025-09-20 09:47:30.587864 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-20 09:47:30.587869 | orchestrator | Saturday 20 September 2025 09:45:05 +0000 (0:00:00.894) 0:08:41.305 ****
2025-09-20 09:47:30.587873 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.587877 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.587882 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.587886 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.587891 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.587895 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.587900 | orchestrator |
2025-09-20 09:47:30.587904 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-09-20 09:47:30.587913 | orchestrator | Saturday 20 September 2025 09:45:06 +0000 (0:00:01.275) 0:08:42.580 ****
2025-09-20 09:47:30.587917 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-20 09:47:30.587922 | orchestrator |
2025-09-20 09:47:30.587926 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-09-20 09:47:30.587931 | orchestrator | Saturday 20 September 2025 09:45:10 +0000 (0:00:04.066) 0:08:46.647 ****
2025-09-20 09:47:30.587936 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-20 09:47:30.587940 | orchestrator |
2025-09-20 09:47:30.587945 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-09-20 09:47:30.587949 | orchestrator | Saturday 20 September 2025 09:45:12 +0000 (0:00:01.959) 0:08:48.606 ****
2025-09-20 09:47:30.587954 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.587958 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.587963 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.587967 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.587972 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:47:30.587976 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:47:30.587981 | orchestrator |
2025-09-20 09:47:30.587985 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-09-20 09:47:30.587990 | orchestrator | Saturday 20 September 2025 09:45:14 +0000 (0:00:01.520) 0:08:50.127 ****
2025-09-20 09:47:30.587994 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.587999 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.588003 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.588008 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:47:30.588012 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:47:30.588017 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:47:30.588021 | orchestrator |
2025-09-20 09:47:30.588026 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-09-20 09:47:30.588031 | orchestrator | Saturday 20 September 2025 09:45:15 +0000 (0:00:01.341) 0:08:51.468 ****
2025-09-20 09:47:30.588035 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:47:30.588040 | orchestrator |
2025-09-20 09:47:30.588056 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-09-20 09:47:30.588060 | orchestrator | Saturday 20 September 2025 09:45:16 +0000 (0:00:01.236) 0:08:52.704 ****
2025-09-20 09:47:30.588065 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.588069 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.588074 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.588078 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:47:30.588083 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:47:30.588087 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:47:30.588092 | orchestrator |
2025-09-20 09:47:30.588096 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-09-20 09:47:30.588101 | orchestrator | Saturday 20 September 2025 09:45:18 +0000 (0:00:01.557) 0:08:54.262 ****
2025-09-20 09:47:30.588105 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.588110 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.588114 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.588119 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:47:30.588123 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:47:30.588128 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:47:30.588132 | orchestrator |
2025-09-20 09:47:30.588137 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-09-20 09:47:30.588141 | orchestrator | Saturday 20 September 2025 09:45:21 +0000 (0:00:03.592) 0:08:57.856 ****
2025-09-20 09:47:30.588146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:47:30.588154 | orchestrator |
2025-09-20 09:47:30.588159 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-09-20 09:47:30.588163 | orchestrator | Saturday 20 September 2025 09:45:23 +0000 (0:00:01.212) 0:08:59.068 ****
2025-09-20 09:47:30.588168 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588172 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588177 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588181 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.588186 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.588193 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.588198 | orchestrator |
2025-09-20 09:47:30.588202 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-09-20 09:47:30.588207 | orchestrator | Saturday 20 September 2025 09:45:23 +0000 (0:00:00.599) 0:08:59.667 ****
2025-09-20 09:47:30.588211 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.588216 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.588220 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.588225 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:47:30.588229 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:47:30.588234 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:47:30.588238 | orchestrator |
2025-09-20 09:47:30.588245 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-09-20 09:47:30.588250 | orchestrator | Saturday 20 September 2025 09:45:26 +0000 (0:00:02.672) 0:09:02.340 ****
2025-09-20 09:47:30.588254 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588259 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588263 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588268 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:47:30.588272 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:47:30.588277 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:47:30.588281 | orchestrator |
2025-09-20 09:47:30.588286 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-09-20 09:47:30.588290 | orchestrator |
2025-09-20 09:47:30.588295 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-20 09:47:30.588299 | orchestrator | Saturday 20 September 2025 09:45:27 +0000 (0:00:00.892) 0:09:03.232 ****
2025-09-20 09:47:30.588304 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:47:30.588309 | orchestrator |
2025-09-20 09:47:30.588313 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-20 09:47:30.588318 | orchestrator | Saturday 20 September 2025 09:45:28 +0000 (0:00:00.833) 0:09:04.066 ****
2025-09-20 09:47:30.588322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:47:30.588327 | orchestrator |
2025-09-20 09:47:30.588331 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-20 09:47:30.588336 | orchestrator | Saturday 20 September 2025 09:45:28 +0000 (0:00:00.523) 0:09:04.589 ****
2025-09-20 09:47:30.588340 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588345 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588349 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588354 | orchestrator |
2025-09-20 09:47:30.588358 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-20 09:47:30.588363 | orchestrator | Saturday 20 September 2025 09:45:29 +0000 (0:00:00.613) 0:09:05.203 ****
2025-09-20 09:47:30.588367 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588372 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588376 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588381 | orchestrator |
2025-09-20 09:47:30.588385 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-20 09:47:30.588390 | orchestrator | Saturday 20 September 2025 09:45:29 +0000 (0:00:00.742) 0:09:05.946 ****
2025-09-20 09:47:30.588394 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588399 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588407 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588412 | orchestrator |
2025-09-20 09:47:30.588417 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-20 09:47:30.588421 | orchestrator | Saturday 20 September 2025 09:45:30 +0000 (0:00:00.744) 0:09:06.691 ****
2025-09-20 09:47:30.588426 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588430 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588435 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588439 | orchestrator |
2025-09-20 09:47:30.588444 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-20 09:47:30.588448 | orchestrator | Saturday 20 September 2025 09:45:31 +0000 (0:00:00.748) 0:09:07.439 ****
2025-09-20 09:47:30.588453 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588457 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588462 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588466 | orchestrator |
2025-09-20 09:47:30.588471 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-20 09:47:30.588475 | orchestrator | Saturday 20 September 2025 09:45:32 +0000 (0:00:00.563) 0:09:08.002 ****
2025-09-20 09:47:30.588480 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588484 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588489 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588493 | orchestrator |
2025-09-20 09:47:30.588498 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-20 09:47:30.588502 | orchestrator | Saturday 20 September 2025 09:45:32 +0000 (0:00:00.346) 0:09:08.349 ****
2025-09-20 09:47:30.588507 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588511 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588516 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588520 | orchestrator |
2025-09-20 09:47:30.588525 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-20 09:47:30.588529 | orchestrator | Saturday 20 September 2025 09:45:32 +0000 (0:00:00.344) 0:09:08.693 ****
2025-09-20 09:47:30.588534 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588538 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588543 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588547 | orchestrator |
2025-09-20 09:47:30.588552 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-20 09:47:30.588557 | orchestrator | Saturday 20 September 2025 09:45:33 +0000 (0:00:00.765) 0:09:09.459 ****
2025-09-20 09:47:30.588561 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588566 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588570 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588575 | orchestrator |
2025-09-20 09:47:30.588579 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-20 09:47:30.588584 | orchestrator | Saturday 20 September 2025 09:45:34 +0000 (0:00:01.156) 0:09:10.616 ****
2025-09-20 09:47:30.588588 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588593 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588600 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588605 | orchestrator |
2025-09-20 09:47:30.588609 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-20 09:47:30.588614 | orchestrator | Saturday 20 September 2025 09:45:34 +0000 (0:00:00.306) 0:09:10.923 ****
2025-09-20 09:47:30.588618 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588623 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588627 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588632 | orchestrator |
2025-09-20 09:47:30.588636 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-20 09:47:30.588643 | orchestrator | Saturday 20 September 2025 09:45:35 +0000 (0:00:00.315) 0:09:11.239 ****
2025-09-20 09:47:30.588648 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588652 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588657 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588661 | orchestrator |
2025-09-20 09:47:30.588666 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-20 09:47:30.588674 | orchestrator | Saturday 20 September 2025 09:45:35 +0000 (0:00:00.389) 0:09:11.629 ****
2025-09-20 09:47:30.588678 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588683 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588687 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588692 | orchestrator |
2025-09-20 09:47:30.588696 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-20 09:47:30.588701 | orchestrator | Saturday 20 September 2025 09:45:36 +0000 (0:00:00.666) 0:09:12.295 ****
2025-09-20 09:47:30.588705 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588710 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588714 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588719 | orchestrator |
2025-09-20 09:47:30.588723 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-20 09:47:30.588728 | orchestrator | Saturday 20 September 2025 09:45:36 +0000 (0:00:00.361) 0:09:12.656 ****
2025-09-20 09:47:30.588732 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588737 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588741 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588746 | orchestrator |
2025-09-20 09:47:30.588750 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-20 09:47:30.588755 | orchestrator | Saturday 20 September 2025 09:45:37 +0000 (0:00:00.396) 0:09:13.052 ****
2025-09-20 09:47:30.588759 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588764 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588768 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588773 | orchestrator |
2025-09-20 09:47:30.588777 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-20 09:47:30.588782 | orchestrator | Saturday 20 September 2025 09:45:37 +0000 (0:00:00.351) 0:09:13.404 ****
2025-09-20 09:47:30.588787 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588791 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588796 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588800 | orchestrator |
2025-09-20 09:47:30.588805 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-20 09:47:30.588809 | orchestrator | Saturday 20 September 2025 09:45:38 +0000 (0:00:00.607) 0:09:14.011 ****
2025-09-20 09:47:30.588814 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588818 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588823 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588827 | orchestrator |
2025-09-20 09:47:30.588832 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-20 09:47:30.588836 | orchestrator | Saturday 20 September 2025 09:45:38 +0000 (0:00:00.315) 0:09:14.327 ****
2025-09-20 09:47:30.588841 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.588845 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.588850 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.588854 | orchestrator |
2025-09-20 09:47:30.588859 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-09-20 09:47:30.588863 | orchestrator | Saturday 20 September 2025 09:45:39 +0000 (0:00:00.715) 0:09:15.043 ****
2025-09-20 09:47:30.588868 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.588873 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.588877 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-09-20 09:47:30.588881 | orchestrator |
2025-09-20 09:47:30.588886 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-09-20 09:47:30.588890 | orchestrator | Saturday 20 September 2025 09:45:39 +0000 (0:00:00.725) 0:09:15.769 ****
2025-09-20 09:47:30.588895 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-20 09:47:30.588899 | orchestrator |
2025-09-20 09:47:30.588904 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-09-20 09:47:30.588909 | orchestrator | Saturday 20 September 2025 09:45:41 +0000 (0:00:02.102) 0:09:17.872 ****
2025-09-20 09:47:30.588917 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-09-20 09:47:30.588923 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.588927 | orchestrator |
2025-09-20 09:47:30.588932 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-09-20 09:47:30.588936 | orchestrator | Saturday 20 September 2025 09:45:42 +0000 (0:00:00.215) 0:09:18.087 ****
2025-09-20 09:47:30.588942 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-20 09:47:30.588953 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-20 09:47:30.588958 | orchestrator |
2025-09-20 09:47:30.588962 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-09-20 09:47:30.588967 | orchestrator | Saturday 20 September 2025 09:45:50 +0000 (0:00:08.326) 0:09:26.414 ****
2025-09-20 09:47:30.588971 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-20 09:47:30.588976 | orchestrator |
2025-09-20 09:47:30.588980 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-09-20 09:47:30.588987 | orchestrator | Saturday 20 September 2025 09:45:54 +0000 (0:00:03.943) 0:09:30.357 ****
2025-09-20 09:47:30.588992 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:47:30.588996 | orchestrator |
2025-09-20 09:47:30.589001 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-09-20 09:47:30.589005 | orchestrator | Saturday 20 September 2025 09:45:55 +0000 (0:00:00.920) 0:09:31.278 ****
2025-09-20 09:47:30.589010 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-20 09:47:30.589014 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-20 09:47:30.589019 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-20 09:47:30.589023 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-09-20 09:47:30.589028 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-09-20 09:47:30.589032 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-09-20 09:47:30.589037 | orchestrator |
2025-09-20 09:47:30.589041 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-09-20 09:47:30.589069 | orchestrator | Saturday 20 September 2025 09:45:56 +0000 (0:00:01.212) 0:09:32.491 ****
2025-09-20 09:47:30.589074 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:47:30.589078 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-20 09:47:30.589083 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-20 09:47:30.589087 | orchestrator |
2025-09-20 09:47:30.589092 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-09-20 09:47:30.589096 | orchestrator | Saturday 20 September 2025 09:45:59 +0000 (0:00:02.646) 0:09:35.137 ****
2025-09-20 09:47:30.589100 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-20 09:47:30.589105 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-20 09:47:30.589109 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.589114 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-20 09:47:30.589118 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-20 09:47:30.589123 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.589131 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-20 09:47:30.589135 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-20 09:47:30.589140 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.589144 | orchestrator |
2025-09-20 09:47:30.589149 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-09-20 09:47:30.589153 | orchestrator | Saturday 20 September 2025 09:46:00 +0000 (0:00:01.278) 0:09:36.416 ****
2025-09-20 09:47:30.589158 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.589162 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.589167 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.589171 | orchestrator |
2025-09-20 09:47:30.589176 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-09-20 09:47:30.589180 | orchestrator | Saturday 20 September 2025 09:46:03 +0000 (0:00:02.613) 0:09:39.030 ****
2025-09-20 09:47:30.589185 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.589189 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.589194 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.589198 | orchestrator |
2025-09-20 09:47:30.589202 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-09-20 09:47:30.589207 | orchestrator | Saturday 20 September 2025 09:46:03 +0000 (0:00:00.467) 0:09:39.497 ****
2025-09-20 09:47:30.589211 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:47:30.589216 | orchestrator |
2025-09-20 09:47:30.589221 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-09-20 09:47:30.589225 | orchestrator | Saturday 20 September 2025 09:46:03 +0000 (0:00:00.490) 0:09:39.988 ****
2025-09-20 09:47:30.589230 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:47:30.589234 | orchestrator |
2025-09-20 09:47:30.589239 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-09-20 09:47:30.589243 | orchestrator | Saturday 20 September 2025 09:46:04 +0000 (0:00:00.641) 0:09:40.630 ****
2025-09-20 09:47:30.589247 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.589252 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.589257 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.589261 | orchestrator |
2025-09-20 09:47:30.589266 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-09-20 09:47:30.589270 | orchestrator | Saturday 20 September 2025 09:46:05 +0000 (0:00:01.264) 0:09:41.894 ****
2025-09-20 09:47:30.589275 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.589279 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.589284 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.589288 | orchestrator |
2025-09-20 09:47:30.589292 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-09-20 09:47:30.589297 | orchestrator | Saturday 20 September 2025 09:46:07 +0000 (0:00:01.130) 0:09:43.025 ****
2025-09-20 09:47:30.589301 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.589306 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.589310 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.589315 | orchestrator |
2025-09-20 09:47:30.589331 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-09-20 09:47:30.589337 | orchestrator | Saturday 20 September 2025 09:46:08 +0000 (0:00:01.719) 0:09:44.745 ****
2025-09-20 09:47:30.589341 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.589346 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.589350 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.589355 | orchestrator |
2025-09-20 09:47:30.589359 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-09-20 09:47:30.589364 | orchestrator | Saturday 20 September 2025 09:46:11 +0000 (0:00:02.316) 0:09:47.062 ****
2025-09-20 09:47:30.589371 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.589375 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.589384 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.589388 | orchestrator |
2025-09-20 09:47:30.589393 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-20 09:47:30.589397 | orchestrator | Saturday 20 September 2025 09:46:12 +0000 (0:00:01.232) 0:09:48.294 ****
2025-09-20 09:47:30.589402 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.589406 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.589411 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.589415 | orchestrator |
2025-09-20 09:47:30.589420 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-20 09:47:30.589425 | orchestrator | Saturday 20 September 2025 09:46:13 +0000 (0:00:00.989) 0:09:49.284 ****
2025-09-20 09:47:30.589429 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:47:30.589434 | orchestrator |
2025-09-20 09:47:30.589438 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-20 09:47:30.589443 | orchestrator | Saturday 20 September 2025 09:46:13 +0000 (0:00:00.525) 0:09:49.810 ****
2025-09-20 09:47:30.589447 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.589452 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.589456 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.589461 | orchestrator |
2025-09-20 09:47:30.589465 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-20 09:47:30.589470 | orchestrator | Saturday 20 September 2025 09:46:14 +0000 (0:00:00.317) 0:09:50.127 ****
2025-09-20 09:47:30.589474 | orchestrator | changed: [testbed-node-3]
2025-09-20 09:47:30.589479 | orchestrator | changed: [testbed-node-4]
2025-09-20 09:47:30.589483 | orchestrator | changed: [testbed-node-5]
2025-09-20 09:47:30.589488 | orchestrator |
2025-09-20 09:47:30.589492 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-20 09:47:30.589497 | orchestrator | Saturday 20 September 2025 09:46:15 +0000 (0:00:01.550) 0:09:51.678 ****
2025-09-20 09:47:30.589501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:47:30.589506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-20 09:47:30.589510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-20 09:47:30.589515 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.589519 | orchestrator |
2025-09-20 09:47:30.589524 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-20 09:47:30.589529 | orchestrator | Saturday 20 September 2025 09:46:16 +0000 (0:00:00.624) 0:09:52.302 ****
2025-09-20 09:47:30.589533 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.589537 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.589542 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.589546 | orchestrator |
2025-09-20 09:47:30.589550 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-20 09:47:30.589554 | orchestrator |
2025-09-20 09:47:30.589558 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-20 09:47:30.589562 | orchestrator | Saturday 20 September 2025 09:46:16 +0000 (0:00:00.592) 0:09:52.895 ****
2025-09-20 09:47:30.589566 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:47:30.589570 | orchestrator |
2025-09-20 09:47:30.589574 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-20 09:47:30.589578 | orchestrator | Saturday 20 September 2025 09:46:17 +0000 (0:00:00.709) 0:09:53.604 ****
2025-09-20 09:47:30.589583 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:47:30.589587 | orchestrator |
2025-09-20 09:47:30.589591 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-20 09:47:30.589595 | orchestrator | Saturday 20 September 2025 09:46:18 +0000 (0:00:00.591) 0:09:54.196 ****
2025-09-20 09:47:30.589599 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:47:30.589606 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:47:30.589610 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:47:30.589614 | orchestrator |
2025-09-20 09:47:30.589618 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-20 09:47:30.589622 | orchestrator | Saturday 20 September 2025 09:46:18 +0000 (0:00:00.717) 0:09:54.914 ****
2025-09-20 09:47:30.589626 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.589631 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.589635 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.589639 | orchestrator |
2025-09-20 09:47:30.589643 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-20 09:47:30.589647 | orchestrator | Saturday 20 September 2025 09:46:19 +0000 (0:00:00.766) 0:09:55.680 ****
2025-09-20 09:47:30.589651 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.589655 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.589659 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.589663 | orchestrator |
2025-09-20 09:47:30.589667 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-20 09:47:30.589671 | orchestrator | Saturday 20 September 2025 09:46:20 +0000 (0:00:00.736) 0:09:56.417 ****
2025-09-20 09:47:30.589675 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:47:30.589680 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:47:30.589684 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:47:30.589688 | orchestrator |
2025-09-20 09:47:30.589692 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-20 09:47:30.589698 | orchestrator | Saturday 20 September 2025 09:46:21 +0000
(0:00:00.712) 0:09:57.130 **** 2025-09-20 09:47:30.589702 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.589706 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.589710 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.589714 | orchestrator | 2025-09-20 09:47:30.589718 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-20 09:47:30.589723 | orchestrator | Saturday 20 September 2025 09:46:21 +0000 (0:00:00.589) 0:09:57.719 **** 2025-09-20 09:47:30.589727 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.589731 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.589737 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.589741 | orchestrator | 2025-09-20 09:47:30.589745 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-20 09:47:30.589750 | orchestrator | Saturday 20 September 2025 09:46:22 +0000 (0:00:00.328) 0:09:58.048 **** 2025-09-20 09:47:30.589754 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.589758 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.589762 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.589766 | orchestrator | 2025-09-20 09:47:30.589770 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-20 09:47:30.589774 | orchestrator | Saturday 20 September 2025 09:46:22 +0000 (0:00:00.287) 0:09:58.336 **** 2025-09-20 09:47:30.589778 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.589782 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.589786 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.589791 | orchestrator | 2025-09-20 09:47:30.589795 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-20 09:47:30.589799 | orchestrator | Saturday 20 September 2025 09:46:23 +0000 (0:00:00.710) 
0:09:59.047 **** 2025-09-20 09:47:30.589803 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.589807 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.589811 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.589815 | orchestrator | 2025-09-20 09:47:30.589819 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-20 09:47:30.589823 | orchestrator | Saturday 20 September 2025 09:46:24 +0000 (0:00:00.977) 0:10:00.025 **** 2025-09-20 09:47:30.589827 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.589831 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.589836 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.589853 | orchestrator | 2025-09-20 09:47:30.589857 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-20 09:47:30.589861 | orchestrator | Saturday 20 September 2025 09:46:24 +0000 (0:00:00.322) 0:10:00.347 **** 2025-09-20 09:47:30.589865 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.589869 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.589873 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.589877 | orchestrator | 2025-09-20 09:47:30.589881 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-20 09:47:30.589885 | orchestrator | Saturday 20 September 2025 09:46:24 +0000 (0:00:00.299) 0:10:00.647 **** 2025-09-20 09:47:30.589890 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.589894 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.589898 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.589902 | orchestrator | 2025-09-20 09:47:30.589906 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-20 09:47:30.589910 | orchestrator | Saturday 20 September 2025 09:46:24 +0000 (0:00:00.331) 0:10:00.978 **** 2025-09-20 
09:47:30.589914 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.589918 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.589922 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.589926 | orchestrator | 2025-09-20 09:47:30.589931 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-20 09:47:30.589935 | orchestrator | Saturday 20 September 2025 09:46:25 +0000 (0:00:00.632) 0:10:01.611 **** 2025-09-20 09:47:30.589939 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.589943 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.589947 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.589951 | orchestrator | 2025-09-20 09:47:30.589955 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-20 09:47:30.589959 | orchestrator | Saturday 20 September 2025 09:46:25 +0000 (0:00:00.338) 0:10:01.950 **** 2025-09-20 09:47:30.589964 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.589968 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.589972 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.589976 | orchestrator | 2025-09-20 09:47:30.589980 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-20 09:47:30.589984 | orchestrator | Saturday 20 September 2025 09:46:26 +0000 (0:00:00.361) 0:10:02.312 **** 2025-09-20 09:47:30.589988 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.589992 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.589996 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.590001 | orchestrator | 2025-09-20 09:47:30.590005 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-20 09:47:30.590009 | orchestrator | Saturday 20 September 2025 09:46:26 +0000 (0:00:00.311) 0:10:02.623 **** 2025-09-20 09:47:30.590026 | orchestrator | 
skipping: [testbed-node-3] 2025-09-20 09:47:30.590031 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.590035 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.590039 | orchestrator | 2025-09-20 09:47:30.590053 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-20 09:47:30.590057 | orchestrator | Saturday 20 September 2025 09:46:27 +0000 (0:00:00.565) 0:10:03.189 **** 2025-09-20 09:47:30.590061 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.590065 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.590070 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.590074 | orchestrator | 2025-09-20 09:47:30.590078 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-20 09:47:30.590082 | orchestrator | Saturday 20 September 2025 09:46:27 +0000 (0:00:00.340) 0:10:03.530 **** 2025-09-20 09:47:30.590086 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.590090 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.590094 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.590098 | orchestrator | 2025-09-20 09:47:30.590102 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-20 09:47:30.590116 | orchestrator | Saturday 20 September 2025 09:46:28 +0000 (0:00:00.550) 0:10:04.080 **** 2025-09-20 09:47:30.590123 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.590127 | orchestrator | 2025-09-20 09:47:30.590131 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-20 09:47:30.590135 | orchestrator | Saturday 20 September 2025 09:46:28 +0000 (0:00:00.791) 0:10:04.872 **** 2025-09-20 09:47:30.590140 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 09:47:30.590144 | 
orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-20 09:47:30.590150 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 09:47:30.590155 | orchestrator | 2025-09-20 09:47:30.590159 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-20 09:47:30.590163 | orchestrator | Saturday 20 September 2025 09:46:30 +0000 (0:00:02.108) 0:10:06.980 **** 2025-09-20 09:47:30.590167 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 09:47:30.590171 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-20 09:47:30.590175 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.590179 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 09:47:30.590183 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-20 09:47:30.590187 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.590191 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 09:47:30.590195 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-20 09:47:30.590199 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.590204 | orchestrator | 2025-09-20 09:47:30.590208 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-20 09:47:30.590212 | orchestrator | Saturday 20 September 2025 09:46:32 +0000 (0:00:01.203) 0:10:08.184 **** 2025-09-20 09:47:30.590216 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.590220 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.590224 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.590228 | orchestrator | 2025-09-20 09:47:30.590232 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-20 09:47:30.590236 | orchestrator | Saturday 20 September 2025 09:46:32 +0000 (0:00:00.269) 0:10:08.454 **** 2025-09-20 09:47:30.590240 | 
orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.590245 | orchestrator | 2025-09-20 09:47:30.590249 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-20 09:47:30.590253 | orchestrator | Saturday 20 September 2025 09:46:33 +0000 (0:00:00.617) 0:10:09.072 **** 2025-09-20 09:47:30.590257 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-20 09:47:30.590261 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-20 09:47:30.590265 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-20 09:47:30.590269 | orchestrator | 2025-09-20 09:47:30.590274 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-20 09:47:30.590278 | orchestrator | Saturday 20 September 2025 09:46:33 +0000 (0:00:00.708) 0:10:09.780 **** 2025-09-20 09:47:30.590282 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 09:47:30.590286 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-20 09:47:30.590290 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 09:47:30.590297 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-20 09:47:30.590302 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-09-20 09:47:30.590306 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-20 09:47:30.590310 | orchestrator | 2025-09-20 09:47:30.590314 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-20 09:47:30.590318 | orchestrator | Saturday 20 September 2025 09:46:38 +0000 (0:00:04.429) 0:10:14.210 **** 2025-09-20 09:47:30.590322 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 09:47:30.590326 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 09:47:30.590330 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 09:47:30.590334 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 09:47:30.590338 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-20 09:47:30.590342 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-20 09:47:30.590346 | orchestrator | 2025-09-20 09:47:30.590351 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-20 09:47:30.590355 | orchestrator | Saturday 20 September 2025 09:46:41 +0000 (0:00:02.928) 0:10:17.138 **** 2025-09-20 09:47:30.590359 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 09:47:30.590363 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.590367 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 09:47:30.590371 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.590377 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 09:47:30.590382 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.590386 | orchestrator | 2025-09-20 09:47:30.590390 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] 
************************************** 2025-09-20 09:47:30.590394 | orchestrator | Saturday 20 September 2025 09:46:42 +0000 (0:00:01.257) 0:10:18.396 **** 2025-09-20 09:47:30.590398 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-20 09:47:30.590402 | orchestrator | 2025-09-20 09:47:30.590406 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-20 09:47:30.590412 | orchestrator | Saturday 20 September 2025 09:46:42 +0000 (0:00:00.247) 0:10:18.643 **** 2025-09-20 09:47:30.590416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590437 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.590441 | orchestrator | 2025-09-20 09:47:30.590446 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-20 09:47:30.590450 | orchestrator | Saturday 20 September 2025 09:46:43 +0000 (0:00:00.577) 0:10:19.221 **** 2025-09-20 09:47:30.590454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590458 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-20 09:47:30.590485 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.590489 | orchestrator | 2025-09-20 09:47:30.590493 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-20 09:47:30.590497 | orchestrator | Saturday 20 September 2025 09:46:43 +0000 (0:00:00.624) 0:10:19.845 **** 2025-09-20 09:47:30.590502 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-20 09:47:30.590506 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-20 09:47:30.590510 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-20 09:47:30.590514 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-20 09:47:30.590518 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}}) 2025-09-20 09:47:30.590522 | orchestrator | 2025-09-20 09:47:30.590527 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-20 09:47:30.590531 | orchestrator | Saturday 20 September 2025 09:47:14 +0000 (0:00:31.064) 0:10:50.909 **** 2025-09-20 09:47:30.590535 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.590539 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.590543 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.590547 | orchestrator | 2025-09-20 09:47:30.590551 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-20 09:47:30.590555 | orchestrator | Saturday 20 September 2025 09:47:15 +0000 (0:00:00.279) 0:10:51.189 **** 2025-09-20 09:47:30.590559 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.590563 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.590567 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.590572 | orchestrator | 2025-09-20 09:47:30.590576 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-20 09:47:30.590580 | orchestrator | Saturday 20 September 2025 09:47:15 +0000 (0:00:00.470) 0:10:51.659 **** 2025-09-20 09:47:30.590584 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.590588 | orchestrator | 2025-09-20 09:47:30.590592 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-20 09:47:30.590598 | orchestrator | Saturday 20 September 2025 09:47:16 +0000 (0:00:00.506) 0:10:52.165 **** 2025-09-20 09:47:30.590603 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.590607 | orchestrator | 2025-09-20 09:47:30.590611 | orchestrator | TASK [ceph-rgw : 
Generate systemd unit file] *********************************** 2025-09-20 09:47:30.590615 | orchestrator | Saturday 20 September 2025 09:47:16 +0000 (0:00:00.567) 0:10:52.733 **** 2025-09-20 09:47:30.590619 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.590623 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.590627 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.590632 | orchestrator | 2025-09-20 09:47:30.590637 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-20 09:47:30.590645 | orchestrator | Saturday 20 September 2025 09:47:17 +0000 (0:00:01.178) 0:10:53.912 **** 2025-09-20 09:47:30.590649 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.590653 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.590657 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.590661 | orchestrator | 2025-09-20 09:47:30.590665 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-20 09:47:30.590669 | orchestrator | Saturday 20 September 2025 09:47:19 +0000 (0:00:01.134) 0:10:55.046 **** 2025-09-20 09:47:30.590673 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:47:30.590677 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:47:30.590682 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:47:30.590686 | orchestrator | 2025-09-20 09:47:30.590690 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-20 09:47:30.590694 | orchestrator | Saturday 20 September 2025 09:47:20 +0000 (0:00:01.926) 0:10:56.973 **** 2025-09-20 09:47:30.590698 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-20 09:47:30.590702 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 
'radosgw_frontend_port': 8081}) 2025-09-20 09:47:30.590706 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-20 09:47:30.590711 | orchestrator | 2025-09-20 09:47:30.590715 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-20 09:47:30.590719 | orchestrator | Saturday 20 September 2025 09:47:23 +0000 (0:00:02.613) 0:10:59.586 **** 2025-09-20 09:47:30.590723 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.590727 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.590731 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.590735 | orchestrator | 2025-09-20 09:47:30.590739 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-20 09:47:30.590743 | orchestrator | Saturday 20 September 2025 09:47:23 +0000 (0:00:00.330) 0:10:59.917 **** 2025-09-20 09:47:30.590748 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:47:30.590752 | orchestrator | 2025-09-20 09:47:30.590756 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-20 09:47:30.590760 | orchestrator | Saturday 20 September 2025 09:47:24 +0000 (0:00:00.813) 0:11:00.731 **** 2025-09-20 09:47:30.590764 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.590768 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.590772 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.590776 | orchestrator | 2025-09-20 09:47:30.590780 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-20 09:47:30.590784 | orchestrator | Saturday 20 September 2025 09:47:25 +0000 (0:00:00.325) 0:11:01.057 **** 2025-09-20 09:47:30.590788 | orchestrator | skipping: [testbed-node-3] 2025-09-20 
09:47:30.590793 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:47:30.590797 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:47:30.590801 | orchestrator | 2025-09-20 09:47:30.590805 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-20 09:47:30.590809 | orchestrator | Saturday 20 September 2025 09:47:25 +0000 (0:00:00.348) 0:11:01.405 **** 2025-09-20 09:47:30.590813 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:47:30.590817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:47:30.590821 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:47:30.590825 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:47:30.590829 | orchestrator | 2025-09-20 09:47:30.590833 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-20 09:47:30.590837 | orchestrator | Saturday 20 September 2025 09:47:26 +0000 (0:00:01.317) 0:11:02.722 **** 2025-09-20 09:47:30.590844 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:47:30.590849 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:47:30.590853 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:47:30.590857 | orchestrator | 2025-09-20 09:47:30.590861 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:47:30.590865 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-20 09:47:30.590869 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-20 09:47:30.590873 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-20 09:47:30.590878 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-20 
09:47:30.590884 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-20 09:47:30.590888 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-20 09:47:30.590892 | orchestrator | 2025-09-20 09:47:30.590896 | orchestrator | 2025-09-20 09:47:30.590900 | orchestrator | 2025-09-20 09:47:30.590905 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:47:30.590909 | orchestrator | Saturday 20 September 2025 09:47:26 +0000 (0:00:00.272) 0:11:02.995 **** 2025-09-20 09:47:30.590915 | orchestrator | =============================================================================== 2025-09-20 09:47:30.590919 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 50.64s 2025-09-20 09:47:30.590923 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.26s 2025-09-20 09:47:30.590927 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.06s 2025-09-20 09:47:30.590932 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.82s 2025-09-20 09:47:30.590936 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.95s 2025-09-20 09:47:30.590940 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.07s 2025-09-20 09:47:30.590944 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.15s 2025-09-20 09:47:30.590948 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.62s 2025-09-20 09:47:30.590952 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.53s 2025-09-20 09:47:30.590956 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.33s 2025-09-20 09:47:30.590960 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.23s 2025-09-20 09:47:30.590964 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.36s 2025-09-20 09:47:30.590968 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.63s 2025-09-20 09:47:30.590972 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.43s 2025-09-20 09:47:30.590976 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.29s 2025-09-20 09:47:30.590980 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.07s 2025-09-20 09:47:30.590984 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.94s 2025-09-20 09:47:30.590988 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.75s 2025-09-20 09:47:30.590992 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.59s 2025-09-20 09:47:30.590996 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.50s 2025-09-20 09:47:30.591003 | orchestrator | 2025-09-20 09:47:30 | INFO  | Wait 1 second(s) until the next check 
2025-09-20 09:47:33.610744 | orchestrator | 2025-09-20 09:47:33 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED
2025-09-20 09:47:33.612019 | orchestrator | 2025-09-20 09:47:33 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state STARTED
2025-09-20 09:47:33.613539 | orchestrator | 2025-09-20 09:47:33 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 is in state STARTED
2025-09-20 09:47:33.613983 | orchestrator | 2025-09-20 09:47:33 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:48:19.360587 | orchestrator | 2025-09-20 09:48:19 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED
2025-09-20 09:48:19.363176 | orchestrator |
2025-09-20 09:48:19 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state STARTED
2025-09-20 09:48:19.366134 | orchestrator | 2025-09-20 09:48:19 | INFO  | Task 880467a2-80b8-4371-abb8-0ca4d45d13b2 is in state SUCCESS
2025-09-20 09:48:19.368389 | orchestrator |
2025-09-20 09:48:19.368420 | orchestrator |
2025-09-20 09:48:19.368433 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:48:19.368444 | orchestrator |
2025-09-20 09:48:19.368456 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 09:48:19.368467 | orchestrator | Saturday 20 September 2025 09:45:22 +0000 (0:00:00.258) 0:00:00.258 ****
2025-09-20 09:48:19.368478 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:48:19.368490 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:48:19.368501 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:48:19.368512 | orchestrator |
2025-09-20 09:48:19.368523 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 09:48:19.368534 | orchestrator | Saturday 20 September 2025 09:45:22 +0000 (0:00:00.274) 0:00:00.533 ****
2025-09-20 09:48:19.368546 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-09-20 09:48:19.368557 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-09-20 09:48:19.368568 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-09-20 09:48:19.368578 | orchestrator |
2025-09-20 09:48:19.368589 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-09-20 09:48:19.368600 | orchestrator |
2025-09-20 09:48:19.368611 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-20 09:48:19.368621 | orchestrator | Saturday 20 September 2025 09:45:22 +0000 (0:00:00.460) 0:00:00.994 ****
2025-09-20 09:48:19.368632 | orchestrator
| included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:48:19.368643 | orchestrator |
2025-09-20 09:48:19.368654 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-09-20 09:48:19.368665 | orchestrator | Saturday 20 September 2025 09:45:23 +0000 (0:00:00.518) 0:00:01.512 ****
2025-09-20 09:48:19.368676 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:48:19.368686 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:48:19.368697 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-20 09:48:19.368707 | orchestrator |
2025-09-20 09:48:19.368718 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-09-20 09:48:19.368892 | orchestrator | Saturday 20 September 2025 09:45:23 +0000 (0:00:00.653) 0:00:02.165 ****
2025-09-20 09:48:19.368911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.368969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.368993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards',
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369061 | orchestrator |
2025-09-20 09:48:19.369072 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-20 09:48:19.369111 | orchestrator | Saturday 20 September 2025 09:45:25 +0000 (0:00:01.878) 0:00:04.044 ****
2025-09-20 09:48:19.369123 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:48:19.369135 | orchestrator |
2025-09-20 09:48:19.369146 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2025-09-20 09:48:19.369157 | orchestrator | Saturday 20 September 2025 09:45:26 +0000 (0:00:00.593) 0:00:04.638 ****
2025-09-20 09:48:19.369178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369271 | orchestrator |
2025-09-20 09:48:19.369283 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2025-09-20 09:48:19.369294 | orchestrator | Saturday 20 September 2025 09:45:29 +0000 (0:00:03.037) 0:00:07.675 ****
2025-09-20 09:48:19.369305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name':
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369391 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:48:19.369402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369435 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:48:19.369447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369483 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:48:19.369494 | orchestrator |
2025-09-20 09:48:19.369505 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2025-09-20 09:48:19.369518 | orchestrator | Saturday 20 September 2025 09:45:30 +0000 (0:00:01.004) 0:00:08.679 ****
2025-09-20 09:48:19.369530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369566 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:48:19.369579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369620 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:48:19.369634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-20 09:48:19.369670 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:48:19.369682 | orchestrator |
2025-09-20 09:48:19.369695 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2025-09-20 09:48:19.369708 | orchestrator | Saturday 20 September 2025 09:45:31 +0000 (0:00:01.239) 0:00:09.918 ****
2025-09-20 09:48:19.369727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-20 09:48:19.369779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy':
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 09:48:19.369794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 09:48:19.369824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 09:48:19.369838 | orchestrator | 2025-09-20 09:48:19.369850 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-20 09:48:19.369863 | orchestrator | Saturday 20 September 2025 09:45:34 +0000 (0:00:02.458) 0:00:12.377 **** 2025-09-20 09:48:19.369875 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:19.369885 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:48:19.369896 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:48:19.369907 | orchestrator | 2025-09-20 09:48:19.369917 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-20 09:48:19.369933 | orchestrator | Saturday 20 September 2025 09:45:37 +0000 (0:00:03.038) 0:00:15.416 **** 2025-09-20 09:48:19.369944 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:19.369955 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:48:19.369965 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:48:19.369976 | orchestrator | 2025-09-20 09:48:19.369986 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-20 09:48:19.369997 | orchestrator | Saturday 20 September 2025 09:45:39 +0000 (0:00:02.258) 0:00:17.675 **** 2025-09-20 09:48:19.370008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 09:48:19.370071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 09:48:19.370115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-20 09:48:19.370128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 09:48:19.370163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 09:48:19.370185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-20 09:48:19.370204 | orchestrator | 2025-09-20 09:48:19.370215 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-20 09:48:19.370226 | orchestrator | Saturday 20 September 2025 09:45:41 
+0000 (0:00:02.172) 0:00:19.847 ****
2025-09-20 09:48:19.370237 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:48:19.370248 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:48:19.370258 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:48:19.370269 | orchestrator |
2025-09-20 09:48:19.370279 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-20 09:48:19.370290 | orchestrator | Saturday 20 September 2025 09:45:41 +0000 (0:00:00.317) 0:00:20.165 ****
2025-09-20 09:48:19.370301 | orchestrator |
2025-09-20 09:48:19.370311 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-20 09:48:19.370322 | orchestrator | Saturday 20 September 2025 09:45:42 +0000 (0:00:00.067) 0:00:20.232 ****
2025-09-20 09:48:19.370332 | orchestrator |
2025-09-20 09:48:19.370343 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-20 09:48:19.370354 | orchestrator | Saturday 20 September 2025 09:45:42 +0000 (0:00:00.059) 0:00:20.292 ****
2025-09-20 09:48:19.370364 | orchestrator |
2025-09-20 09:48:19.370375 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-09-20 09:48:19.370386 | orchestrator | Saturday 20 September 2025 09:45:42 +0000 (0:00:00.068) 0:00:20.360 ****
2025-09-20 09:48:19.370396 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:48:19.370407 | orchestrator |
2025-09-20 09:48:19.370418 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-09-20 09:48:19.370429 | orchestrator | Saturday 20 September 2025 09:45:42 +0000 (0:00:00.209) 0:00:20.570 ****
2025-09-20 09:48:19.370439 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:48:19.370450 | orchestrator |
2025-09-20 09:48:19.370461 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-09-20 09:48:19.370471 | orchestrator | Saturday 20 September 2025 09:45:43 +0000 (0:00:00.620) 0:00:21.191 ****
2025-09-20 09:48:19.370482 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:48:19.370493 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:48:19.370503 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:48:19.370514 | orchestrator |
2025-09-20 09:48:19.370525 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-09-20 09:48:19.370535 | orchestrator | Saturday 20 September 2025 09:46:44 +0000 (0:01:01.717) 0:01:22.908 ****
2025-09-20 09:48:19.370546 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:48:19.370557 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:48:19.370567 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:48:19.370578 | orchestrator |
2025-09-20 09:48:19.370589 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-20 09:48:19.370599 | orchestrator | Saturday 20 September 2025 09:48:06 +0000 (0:01:21.430) 0:02:44.339 ****
2025-09-20 09:48:19.370610 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:48:19.370621 | orchestrator |
2025-09-20 09:48:19.370632 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-09-20 09:48:19.370642 | orchestrator | Saturday 20 September 2025 09:48:06 +0000 (0:00:00.463) 0:02:44.802 ****
2025-09-20 09:48:19.370653 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:48:19.370664 | orchestrator |
2025-09-20 09:48:19.370679 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-09-20 09:48:19.370697 | orchestrator | Saturday 20 September 2025 09:48:09 +0000 (0:00:02.616) 0:02:47.419 ****
2025-09-20 09:48:19.370707 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:48:19.370718 |
orchestrator |
2025-09-20 09:48:19.370729 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-09-20 09:48:19.370739 | orchestrator | Saturday 20 September 2025 09:48:11 +0000 (0:00:02.191) 0:02:49.611 ****
2025-09-20 09:48:19.370750 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:48:19.370761 | orchestrator |
2025-09-20 09:48:19.370772 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-09-20 09:48:19.370783 | orchestrator | Saturday 20 September 2025 09:48:14 +0000 (0:00:02.667) 0:02:52.278 ****
2025-09-20 09:48:19.370793 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:48:19.370804 | orchestrator |
2025-09-20 09:48:19.370815 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:48:19.370827 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 09:48:19.370839 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-20 09:48:19.370850 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-20 09:48:19.370861 | orchestrator |
2025-09-20 09:48:19.370872 | orchestrator |
2025-09-20 09:48:19.370883 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:48:19.370898 | orchestrator | Saturday 20 September 2025 09:48:16 +0000 (0:00:02.455) 0:02:54.734 ****
2025-09-20 09:48:19.370910 | orchestrator | ===============================================================================
2025-09-20 09:48:19.370921 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 81.43s
2025-09-20 09:48:19.370931 | orchestrator | opensearch : Restart opensearch container ------------------------------ 61.72s
2025-09-20 09:48:19.370942 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.04s
2025-09-20 09:48:19.370953 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.04s
2025-09-20 09:48:19.370963 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.67s
2025-09-20 09:48:19.370974 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.62s
2025-09-20 09:48:19.370985 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.46s
2025-09-20 09:48:19.370996 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.46s
2025-09-20 09:48:19.371006 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.26s
2025-09-20 09:48:19.371017 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.19s
2025-09-20 09:48:19.371028 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.17s
2025-09-20 09:48:19.371038 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.88s
2025-09-20 09:48:19.371049 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.24s
2025-09-20 09:48:19.371060 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.00s
2025-09-20 09:48:19.371070 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.65s
2025-09-20 09:48:19.371128 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.62s
2025-09-20 09:48:19.371141 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.59s
2025-09-20 09:48:19.371152 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s
2025-09-20 09:48:19.371163 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s
2025-09-20 09:48:19.371174 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s
2025-09-20 09:48:19.371192 | orchestrator | 2025-09-20 09:48:19 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:48:22.413525 | orchestrator | 2025-09-20 09:48:22 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED
2025-09-20 09:48:22.417452 | orchestrator | 2025-09-20 09:48:22 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state STARTED
2025-09-20 09:48:22.418069 | orchestrator | 2025-09-20 09:48:22 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:48:25.448554 | orchestrator | 2025-09-20 09:48:25 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED
2025-09-20 09:48:25.449192 | orchestrator | 2025-09-20 09:48:25 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state STARTED
2025-09-20 09:48:25.449226 | orchestrator | 2025-09-20 09:48:25 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:48:28.495471 | orchestrator | 2025-09-20 09:48:28 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED
2025-09-20 09:48:28.496977 | orchestrator | 2025-09-20 09:48:28 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state STARTED
2025-09-20 09:48:28.497008 | orchestrator | 2025-09-20 09:48:28 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:48:31.544186 | orchestrator | 2025-09-20 09:48:31 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state STARTED
2025-09-20 09:48:31.544291 | orchestrator | 2025-09-20 09:48:31 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state STARTED
2025-09-20 09:48:31.544305 | orchestrator | 2025-09-20 09:48:31 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:48:34.596999 | orchestrator | 2025-09-20 09:48:34 | INFO  | Task ab55e590-c4e5-453a-a968-1b8d31cc3afb is in state SUCCESS
2025-09-20
09:48:34.599004 | orchestrator |
2025-09-20 09:48:34.599041 | orchestrator |
2025-09-20 09:48:34.599053 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-09-20 09:48:34.599064 | orchestrator |
2025-09-20 09:48:34.599074 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-20 09:48:34.599084 | orchestrator | Saturday 20 September 2025 09:45:21 +0000 (0:00:00.102) 0:00:00.102 ****
2025-09-20 09:48:34.599125 | orchestrator | ok: [localhost] => {
2025-09-20 09:48:34.599137 | orchestrator |     "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-09-20 09:48:34.599147 | orchestrator | }
2025-09-20 09:48:34.599157 | orchestrator |
2025-09-20 09:48:34.599166 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-09-20 09:48:34.599176 | orchestrator | Saturday 20 September 2025 09:45:21 +0000 (0:00:00.045) 0:00:00.147 ****
2025-09-20 09:48:34.599186 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-09-20 09:48:34.599197 | orchestrator | ...ignoring
2025-09-20 09:48:34.599208 | orchestrator |
2025-09-20 09:48:34.599218 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-09-20 09:48:34.599227 | orchestrator | Saturday 20 September 2025 09:45:24 +0000 (0:00:02.859) 0:00:03.007 ****
2025-09-20 09:48:34.599237 | orchestrator | skipping: [localhost]
2025-09-20 09:48:34.599246 | orchestrator |
2025-09-20 09:48:34.599256 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-09-20 09:48:34.599265 | orchestrator | Saturday 20 September 2025 09:45:24 +0000 (0:00:00.050) 0:00:03.057 ****
2025-09-20 09:48:34.599274 | orchestrator | ok: [localhost]
2025-09-20 09:48:34.599284 | orchestrator |
2025-09-20 09:48:34.599293 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:48:34.599331 | orchestrator |
2025-09-20 09:48:34.599342 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 09:48:34.599351 | orchestrator | Saturday 20 September 2025 09:45:24 +0000 (0:00:00.153) 0:00:03.211 ****
2025-09-20 09:48:34.599474 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:48:34.599490 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:48:34.599500 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:48:34.599557 | orchestrator |
2025-09-20 09:48:34.599571 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 09:48:34.599580 | orchestrator | Saturday 20 September 2025 09:45:25 +0000 (0:00:00.298) 0:00:03.510 ****
2025-09-20 09:48:34.599590 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-09-20 09:48:34.599601 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-09-20 09:48:34.599612 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-20 09:48:34.599622 | orchestrator | 2025-09-20 09:48:34.599842 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-20 09:48:34.599858 | orchestrator | 2025-09-20 09:48:34.599870 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-20 09:48:34.599881 | orchestrator | Saturday 20 September 2025 09:45:25 +0000 (0:00:00.557) 0:00:04.068 **** 2025-09-20 09:48:34.599893 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-20 09:48:34.599904 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-20 09:48:34.599915 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-20 09:48:34.599926 | orchestrator | 2025-09-20 09:48:34.599937 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 09:48:34.599948 | orchestrator | Saturday 20 September 2025 09:45:26 +0000 (0:00:00.384) 0:00:04.452 **** 2025-09-20 09:48:34.599959 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:48:34.599970 | orchestrator | 2025-09-20 09:48:34.599979 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-20 09:48:34.599989 | orchestrator | Saturday 20 September 2025 09:45:26 +0000 (0:00:00.585) 0:00:05.038 **** 2025-09-20 09:48:34.600032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 09:48:34.600048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 09:48:34.600138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 09:48:34.600152 | orchestrator | 2025-09-20 09:48:34.600173 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-20 09:48:34.600183 | orchestrator | Saturday 20 September 2025 09:45:30 +0000 (0:00:03.294) 0:00:08.332 **** 2025-09-20 09:48:34.600193 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.600204 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.600213 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.600223 | orchestrator | 2025-09-20 09:48:34.600233 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-20 09:48:34.600249 | orchestrator | Saturday 20 September 2025 09:45:30 +0000 (0:00:00.757) 0:00:09.089 **** 2025-09-20 09:48:34.600259 | orchestrator | skipping: [testbed-node-2] 2025-09-20 
09:48:34.600269 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.600278 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.600288 | orchestrator | 2025-09-20 09:48:34.600297 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-20 09:48:34.600307 | orchestrator | Saturday 20 September 2025 09:45:32 +0000 (0:00:01.571) 0:00:10.661 **** 2025-09-20 09:48:34.600317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 09:48:34.600340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 09:48:34.600363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 
09:48:34.600375 | orchestrator | 2025-09-20 09:48:34.600385 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-20 09:48:34.600394 | orchestrator | Saturday 20 September 2025 09:45:36 +0000 (0:00:04.224) 0:00:14.886 **** 2025-09-20 09:48:34.600404 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.600413 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.600423 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.600433 | orchestrator | 2025-09-20 09:48:34.600442 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-20 09:48:34.600452 | orchestrator | Saturday 20 September 2025 09:45:37 +0000 (0:00:01.343) 0:00:16.229 **** 2025-09-20 09:48:34.600461 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.600471 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:48:34.600481 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:48:34.600490 | orchestrator | 2025-09-20 09:48:34.600500 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 09:48:34.600509 | orchestrator | Saturday 20 September 2025 09:45:42 +0000 (0:00:04.749) 0:00:20.979 **** 2025-09-20 09:48:34.600519 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:48:34.600529 | orchestrator | 2025-09-20 09:48:34.600538 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-20 09:48:34.600548 | orchestrator | Saturday 20 September 2025 09:45:43 +0000 (0:00:00.527) 0:00:21.507 **** 2025-09-20 09:48:34.600571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:48:34.600590 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.600601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:48:34.600613 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.600635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:48:34.600652 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.600662 | orchestrator | 2025-09-20 09:48:34.600672 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-20 09:48:34.600683 | orchestrator | Saturday 20 September 2025 09:45:46 +0000 (0:00:03.250) 0:00:24.757 **** 2025-09-20 09:48:34.600694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:48:34.600706 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.600742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:48:34.600760 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.600770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:48:34.600782 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.600792 | orchestrator | 2025-09-20 09:48:34.600801 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-20 09:48:34.600811 | orchestrator | Saturday 20 September 2025 09:45:49 +0000 (0:00:02.876) 0:00:27.634 **** 2025-09-20 09:48:34.600826 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:48:34.600849 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.600869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:48:34.600881 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.600891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-20 09:48:34.600908 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.600918 | orchestrator | 2025-09-20 09:48:34.600932 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-20 09:48:34.600942 | orchestrator | Saturday 20 September 2025 09:45:52 +0000 
(0:00:03.177) 0:00:30.812 **** 2025-09-20 09:48:34.600961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 09:48:34.600973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 09:48:34.601018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-20 09:48:34.601031 | orchestrator | 2025-09-20 09:48:34.601042 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-20 09:48:34.601052 | orchestrator | Saturday 20 September 2025 09:45:56 +0000 (0:00:03.785) 0:00:34.598 **** 2025-09-20 09:48:34.601062 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.601072 | orchestrator | 
changed: [testbed-node-1] 2025-09-20 09:48:34.601082 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:48:34.601108 | orchestrator | 2025-09-20 09:48:34.601119 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-20 09:48:34.601129 | orchestrator | Saturday 20 September 2025 09:45:57 +0000 (0:00:00.978) 0:00:35.576 **** 2025-09-20 09:48:34.601140 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.601150 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:48:34.601160 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:48:34.601170 | orchestrator | 2025-09-20 09:48:34.601180 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-20 09:48:34.601191 | orchestrator | Saturday 20 September 2025 09:45:57 +0000 (0:00:00.563) 0:00:36.140 **** 2025-09-20 09:48:34.601201 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.601211 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:48:34.601221 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:48:34.601232 | orchestrator | 2025-09-20 09:48:34.601242 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-20 09:48:34.601252 | orchestrator | Saturday 20 September 2025 09:45:58 +0000 (0:00:00.372) 0:00:36.512 **** 2025-09-20 09:48:34.601264 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-20 09:48:34.601293 | orchestrator | ...ignoring 2025-09-20 09:48:34.601304 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-20 09:48:34.601314 | orchestrator | ...ignoring 2025-09-20 09:48:34.601324 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-20 09:48:34.601333 | orchestrator | ...ignoring 2025-09-20 09:48:34.601343 | orchestrator | 2025-09-20 09:48:34.601353 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-20 09:48:34.601362 | orchestrator | Saturday 20 September 2025 09:46:09 +0000 (0:00:10.906) 0:00:47.419 **** 2025-09-20 09:48:34.601372 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.601381 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:48:34.601391 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:48:34.601400 | orchestrator | 2025-09-20 09:48:34.601410 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-20 09:48:34.601419 | orchestrator | Saturday 20 September 2025 09:46:09 +0000 (0:00:00.506) 0:00:47.926 **** 2025-09-20 09:48:34.601429 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.601439 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.601448 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.601457 | orchestrator | 2025-09-20 09:48:34.601467 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-20 09:48:34.601477 | orchestrator | Saturday 20 September 2025 09:46:10 +0000 (0:00:00.696) 0:00:48.622 **** 2025-09-20 09:48:34.601486 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.601496 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.601505 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.601515 | orchestrator | 2025-09-20 09:48:34.601524 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-20 09:48:34.601534 | orchestrator | Saturday 20 September 2025 09:46:10 +0000 (0:00:00.457) 0:00:49.080 **** 2025-09-20 09:48:34.601544 | orchestrator | skipping: 
[testbed-node-0] 2025-09-20 09:48:34.601553 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.601563 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.601572 | orchestrator | 2025-09-20 09:48:34.601586 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-20 09:48:34.601596 | orchestrator | Saturday 20 September 2025 09:46:11 +0000 (0:00:00.472) 0:00:49.552 **** 2025-09-20 09:48:34.601606 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.601616 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:48:34.601625 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:48:34.601635 | orchestrator | 2025-09-20 09:48:34.601644 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-20 09:48:34.601654 | orchestrator | Saturday 20 September 2025 09:46:11 +0000 (0:00:00.451) 0:00:50.004 **** 2025-09-20 09:48:34.601668 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.601678 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.601688 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.601698 | orchestrator | 2025-09-20 09:48:34.601707 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 09:48:34.601717 | orchestrator | Saturday 20 September 2025 09:46:12 +0000 (0:00:00.845) 0:00:50.849 **** 2025-09-20 09:48:34.601726 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.601736 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.601745 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-20 09:48:34.601755 | orchestrator | 2025-09-20 09:48:34.601765 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-20 09:48:34.601774 | orchestrator | Saturday 20 September 2025 09:46:12 +0000 (0:00:00.371) 0:00:51.221 **** 2025-09-20 
09:48:34.601783 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.601797 | orchestrator | 2025-09-20 09:48:34.601806 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-20 09:48:34.601816 | orchestrator | Saturday 20 September 2025 09:46:23 +0000 (0:00:10.609) 0:01:01.830 **** 2025-09-20 09:48:34.601825 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.601835 | orchestrator | 2025-09-20 09:48:34.601844 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 09:48:34.601854 | orchestrator | Saturday 20 September 2025 09:46:23 +0000 (0:00:00.131) 0:01:01.962 **** 2025-09-20 09:48:34.601863 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.601873 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.601882 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.601892 | orchestrator | 2025-09-20 09:48:34.601901 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-20 09:48:34.601911 | orchestrator | Saturday 20 September 2025 09:46:24 +0000 (0:00:01.015) 0:01:02.977 **** 2025-09-20 09:48:34.601920 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.601930 | orchestrator | 2025-09-20 09:48:34.601940 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-20 09:48:34.601949 | orchestrator | Saturday 20 September 2025 09:46:32 +0000 (0:00:07.439) 0:01:10.417 **** 2025-09-20 09:48:34.601959 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.601968 | orchestrator | 2025-09-20 09:48:34.601978 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-20 09:48:34.601987 | orchestrator | Saturday 20 September 2025 09:46:33 +0000 (0:00:01.577) 0:01:11.994 **** 2025-09-20 09:48:34.601997 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.602006 | 
orchestrator | 2025-09-20 09:48:34.602134 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-20 09:48:34.602152 | orchestrator | Saturday 20 September 2025 09:46:36 +0000 (0:00:02.513) 0:01:14.508 **** 2025-09-20 09:48:34.602161 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.602171 | orchestrator | 2025-09-20 09:48:34.602181 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-20 09:48:34.602190 | orchestrator | Saturday 20 September 2025 09:46:36 +0000 (0:00:00.133) 0:01:14.642 **** 2025-09-20 09:48:34.602200 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.602209 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.602219 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.602228 | orchestrator | 2025-09-20 09:48:34.602238 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-20 09:48:34.602247 | orchestrator | Saturday 20 September 2025 09:46:36 +0000 (0:00:00.332) 0:01:14.974 **** 2025-09-20 09:48:34.602256 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.602266 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-20 09:48:34.602275 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:48:34.602285 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:48:34.602294 | orchestrator | 2025-09-20 09:48:34.602304 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-20 09:48:34.602313 | orchestrator | skipping: no hosts matched 2025-09-20 09:48:34.602322 | orchestrator | 2025-09-20 09:48:34.602332 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-20 09:48:34.602341 | orchestrator | 2025-09-20 09:48:34.602351 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-09-20 09:48:34.602360 | orchestrator | Saturday 20 September 2025 09:46:37 +0000 (0:00:00.525) 0:01:15.500 **** 2025-09-20 09:48:34.602370 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:48:34.602379 | orchestrator | 2025-09-20 09:48:34.602389 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-20 09:48:34.602398 | orchestrator | Saturday 20 September 2025 09:46:56 +0000 (0:00:19.391) 0:01:34.892 **** 2025-09-20 09:48:34.602407 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:48:34.602417 | orchestrator | 2025-09-20 09:48:34.602434 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-20 09:48:34.602444 | orchestrator | Saturday 20 September 2025 09:47:17 +0000 (0:00:20.653) 0:01:55.545 **** 2025-09-20 09:48:34.602453 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:48:34.602462 | orchestrator | 2025-09-20 09:48:34.602472 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-20 09:48:34.602481 | orchestrator | 2025-09-20 09:48:34.602491 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-20 09:48:34.602500 | orchestrator | Saturday 20 September 2025 09:47:19 +0000 (0:00:02.180) 0:01:57.726 **** 2025-09-20 09:48:34.602510 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:48:34.602519 | orchestrator | 2025-09-20 09:48:34.602534 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-20 09:48:34.602544 | orchestrator | Saturday 20 September 2025 09:47:43 +0000 (0:00:23.991) 0:02:21.717 **** 2025-09-20 09:48:34.602554 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:48:34.602563 | orchestrator | 2025-09-20 09:48:34.602573 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-20 09:48:34.602582 
| orchestrator | Saturday 20 September 2025 09:48:00 +0000 (0:00:16.574) 0:02:38.291 **** 2025-09-20 09:48:34.602592 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:48:34.602601 | orchestrator | 2025-09-20 09:48:34.602611 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-20 09:48:34.602620 | orchestrator | 2025-09-20 09:48:34.602637 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-20 09:48:34.602647 | orchestrator | Saturday 20 September 2025 09:48:02 +0000 (0:00:02.247) 0:02:40.539 **** 2025-09-20 09:48:34.602657 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.602666 | orchestrator | 2025-09-20 09:48:34.602675 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-20 09:48:34.602685 | orchestrator | Saturday 20 September 2025 09:48:18 +0000 (0:00:16.140) 0:02:56.680 **** 2025-09-20 09:48:34.602695 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.602704 | orchestrator | 2025-09-20 09:48:34.602714 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-20 09:48:34.602723 | orchestrator | Saturday 20 September 2025 09:48:18 +0000 (0:00:00.543) 0:02:57.224 **** 2025-09-20 09:48:34.602733 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.602742 | orchestrator | 2025-09-20 09:48:34.602751 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-20 09:48:34.602761 | orchestrator | 2025-09-20 09:48:34.602770 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-20 09:48:34.602780 | orchestrator | Saturday 20 September 2025 09:48:21 +0000 (0:00:02.722) 0:02:59.946 **** 2025-09-20 09:48:34.602789 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:48:34.602799 | orchestrator | 
2025-09-20 09:48:34.602808 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-20 09:48:34.602818 | orchestrator | Saturday 20 September 2025 09:48:22 +0000 (0:00:00.534) 0:03:00.480 **** 2025-09-20 09:48:34.602827 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.602837 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.602846 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.602856 | orchestrator | 2025-09-20 09:48:34.602865 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-20 09:48:34.602875 | orchestrator | Saturday 20 September 2025 09:48:24 +0000 (0:00:02.245) 0:03:02.726 **** 2025-09-20 09:48:34.602884 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.602894 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.602903 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.602913 | orchestrator | 2025-09-20 09:48:34.602922 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-20 09:48:34.602932 | orchestrator | Saturday 20 September 2025 09:48:26 +0000 (0:00:02.119) 0:03:04.845 **** 2025-09-20 09:48:34.602941 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.602956 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.602965 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.602974 | orchestrator | 2025-09-20 09:48:34.602984 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-20 09:48:34.602993 | orchestrator | Saturday 20 September 2025 09:48:28 +0000 (0:00:02.090) 0:03:06.936 **** 2025-09-20 09:48:34.603003 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.603012 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.603022 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:48:34.603031 | orchestrator | 
2025-09-20 09:48:34.603041 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-20 09:48:34.603050 | orchestrator | Saturday 20 September 2025 09:48:30 +0000 (0:00:02.124) 0:03:09.060 **** 2025-09-20 09:48:34.603060 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:48:34.603069 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:48:34.603079 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:48:34.603135 | orchestrator | 2025-09-20 09:48:34.603147 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-20 09:48:34.603157 | orchestrator | Saturday 20 September 2025 09:48:33 +0000 (0:00:02.909) 0:03:11.969 **** 2025-09-20 09:48:34.603166 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:48:34.603176 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:48:34.603185 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:48:34.603195 | orchestrator | 2025-09-20 09:48:34.603204 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:48:34.603214 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-20 09:48:34.603224 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-20 09:48:34.603235 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-20 09:48:34.603245 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-20 09:48:34.603254 | orchestrator | 2025-09-20 09:48:34.603264 | orchestrator | 2025-09-20 09:48:34.603274 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:48:34.603283 | orchestrator | Saturday 20 September 2025 09:48:34 +0000 (0:00:00.449) 0:03:12.419 **** 2025-09-20 09:48:34.603293 | 
orchestrator | =============================================================================== 2025-09-20 09:48:34.603302 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.38s 2025-09-20 09:48:34.603312 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 37.23s 2025-09-20 09:48:34.603321 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.14s 2025-09-20 09:48:34.603331 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.91s 2025-09-20 09:48:34.603340 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.61s 2025-09-20 09:48:34.603350 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.44s 2025-09-20 09:48:34.603365 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.75s 2025-09-20 09:48:34.603375 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.43s 2025-09-20 09:48:34.603384 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.22s 2025-09-20 09:48:34.603394 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.79s 2025-09-20 09:48:34.603404 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.29s 2025-09-20 09:48:34.603413 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.25s 2025-09-20 09:48:34.603430 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.18s 2025-09-20 09:48:34.603471 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.91s 2025-09-20 09:48:34.603481 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.88s 2025-09-20 09:48:34.603491 | 
orchestrator | Check MariaDB service --------------------------------------------------- 2.86s 2025-09-20 09:48:34.603501 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.72s 2025-09-20 09:48:34.603510 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.51s 2025-09-20 09:48:34.603520 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.25s 2025-09-20 09:48:34.603529 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.12s 2025-09-20 09:48:34.603539 | orchestrator | 2025-09-20 09:48:34 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state STARTED 2025-09-20 09:48:34.603548 | orchestrator | 2025-09-20 09:48:34 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:48:37.642636 | orchestrator | 2025-09-20 09:48:37 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state STARTED 2025-09-20 09:48:37.645513 | orchestrator | 2025-09-20 09:48:37 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:48:37.645540 | orchestrator | 2025-09-20 09:48:37 | INFO  | Task 4dd97186-5569-4b43-a8a0-a36d837cfb3d is in state STARTED 2025-09-20 09:48:37.645551 | orchestrator | 2025-09-20 09:48:37 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:49:41.628015 | orchestrator | 2025-09-20 09:49:41.628146 | orchestrator | 2025-09-20 09:49:41.628156 | orchestrator | PLAY [Create ceph pools]
******************************************************* 2025-09-20 09:49:41.628162 | orchestrator | 2025-09-20 09:49:41.628167 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-20 09:49:41.628172 | orchestrator | Saturday 20 September 2025 09:47:31 +0000 (0:00:00.613) 0:00:00.613 **** 2025-09-20 09:49:41.628178 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:49:41.628183 | orchestrator | 2025-09-20 09:49:41.628188 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-20 09:49:41.628193 | orchestrator | Saturday 20 September 2025 09:47:32 +0000 (0:00:00.627) 0:00:01.241 **** 2025-09-20 09:49:41.628198 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.628204 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.628208 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.628213 | orchestrator | 2025-09-20 09:49:41.628217 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-20 09:49:41.628222 | orchestrator | Saturday 20 September 2025 09:47:33 +0000 (0:00:00.647) 0:00:01.889 **** 2025-09-20 09:49:41.628227 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.628231 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.628236 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.628240 | orchestrator | 2025-09-20 09:49:41.628245 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-20 09:49:41.628261 | orchestrator | Saturday 20 September 2025 09:47:33 +0000 (0:00:00.308) 0:00:02.197 **** 2025-09-20 09:49:41.628293 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.628299 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.628303 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.628308 | orchestrator | 2025-09-20 
09:49:41.628313 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-20 09:49:41.628317 | orchestrator | Saturday 20 September 2025 09:47:34 +0000 (0:00:00.801) 0:00:02.999 **** 2025-09-20 09:49:41.628322 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.628327 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.628331 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.628336 | orchestrator | 2025-09-20 09:49:41.628341 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-20 09:49:41.628345 | orchestrator | Saturday 20 September 2025 09:47:34 +0000 (0:00:00.325) 0:00:03.325 **** 2025-09-20 09:49:41.628350 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.628355 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.628359 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.628364 | orchestrator | 2025-09-20 09:49:41.628369 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-20 09:49:41.628373 | orchestrator | Saturday 20 September 2025 09:47:34 +0000 (0:00:00.323) 0:00:03.648 **** 2025-09-20 09:49:41.628378 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.628382 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.628387 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.628409 | orchestrator | 2025-09-20 09:49:41.628414 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-20 09:49:41.628419 | orchestrator | Saturday 20 September 2025 09:47:35 +0000 (0:00:00.309) 0:00:03.957 **** 2025-09-20 09:49:41.628424 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.628429 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.628433 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.628438 | orchestrator | 2025-09-20 09:49:41.628442 | orchestrator | TASK [ceph-facts : 
Set_fact ceph_release ceph_stable_release] ****************** 2025-09-20 09:49:41.628447 | orchestrator | Saturday 20 September 2025 09:47:35 +0000 (0:00:00.588) 0:00:04.545 **** 2025-09-20 09:49:41.628452 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.628456 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.628461 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.628465 | orchestrator | 2025-09-20 09:49:41.628470 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-20 09:49:41.628475 | orchestrator | Saturday 20 September 2025 09:47:36 +0000 (0:00:00.333) 0:00:04.879 **** 2025-09-20 09:49:41.628479 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 09:49:41.628484 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 09:49:41.628488 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 09:49:41.628493 | orchestrator | 2025-09-20 09:49:41.628668 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-20 09:49:41.628678 | orchestrator | Saturday 20 September 2025 09:47:36 +0000 (0:00:00.631) 0:00:05.511 **** 2025-09-20 09:49:41.628683 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.628687 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.628692 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.628696 | orchestrator | 2025-09-20 09:49:41.628701 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-20 09:49:41.628705 | orchestrator | Saturday 20 September 2025 09:47:37 +0000 (0:00:00.433) 0:00:05.945 **** 2025-09-20 09:49:41.628710 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-20 09:49:41.628714 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-20 09:49:41.628719 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-20 09:49:41.628724 | orchestrator | 2025-09-20 09:49:41.628728 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-20 09:49:41.628733 | orchestrator | Saturday 20 September 2025 09:47:39 +0000 (0:00:02.202) 0:00:08.148 **** 2025-09-20 09:49:41.628737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-20 09:49:41.628742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-20 09:49:41.628747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-20 09:49:41.628751 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.628756 | orchestrator | 2025-09-20 09:49:41.628760 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-20 09:49:41.628775 | orchestrator | Saturday 20 September 2025 09:47:39 +0000 (0:00:00.377) 0:00:08.525 **** 2025-09-20 09:49:41.628782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.628788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.628793 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.628805 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 09:49:41.628810 | orchestrator | 2025-09-20 09:49:41.628815 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-20 09:49:41.628819 | orchestrator | Saturday 20 September 2025 09:47:40 +0000 (0:00:00.784) 0:00:09.310 **** 2025-09-20 09:49:41.628830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.628835 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.628840 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.628845 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.628849 | orchestrator | 2025-09-20 09:49:41.628854 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-20 09:49:41.628858 | orchestrator | Saturday 20 September 2025 09:47:40 +0000 
(0:00:00.175) 0:00:09.485 **** 2025-09-20 09:49:41.628865 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '72d11574b2a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-20 09:47:38.049063', 'end': '2025-09-20 09:47:38.087108', 'delta': '0:00:00.038045', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['72d11574b2a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-20 09:49:41.628872 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '38d4be8c9418', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-20 09:47:38.806767', 'end': '2025-09-20 09:47:38.849699', 'delta': '0:00:00.042932', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['38d4be8c9418'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-20 09:49:41.628882 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ed35346005b9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-20 09:47:39.336193', 'end': '2025-09-20 09:47:39.375634', 'delta': '0:00:00.039441', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker 
ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ed35346005b9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-20 09:49:41.628892 | orchestrator | 2025-09-20 09:49:41.628896 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-20 09:49:41.628901 | orchestrator | Saturday 20 September 2025 09:47:41 +0000 (0:00:00.368) 0:00:09.853 **** 2025-09-20 09:49:41.628905 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.628910 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.628915 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.628919 | orchestrator | 2025-09-20 09:49:41.628924 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-20 09:49:41.628931 | orchestrator | Saturday 20 September 2025 09:47:41 +0000 (0:00:00.432) 0:00:10.286 **** 2025-09-20 09:49:41.628935 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-20 09:49:41.628940 | orchestrator | 2025-09-20 09:49:41.628945 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-20 09:49:41.628949 | orchestrator | Saturday 20 September 2025 09:47:43 +0000 (0:00:01.695) 0:00:11.981 **** 2025-09-20 09:49:41.628954 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.628958 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.628994 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.629000 | orchestrator | 2025-09-20 09:49:41.629005 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-20 09:49:41.629009 | orchestrator | Saturday 20 
September 2025 09:47:43 +0000 (0:00:00.319) 0:00:12.301 **** 2025-09-20 09:49:41.629014 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629018 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629023 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.629027 | orchestrator | 2025-09-20 09:49:41.629032 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-20 09:49:41.629036 | orchestrator | Saturday 20 September 2025 09:47:44 +0000 (0:00:00.462) 0:00:12.764 **** 2025-09-20 09:49:41.629041 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629046 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629050 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.629055 | orchestrator | 2025-09-20 09:49:41.629255 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-20 09:49:41.629265 | orchestrator | Saturday 20 September 2025 09:47:44 +0000 (0:00:00.499) 0:00:13.263 **** 2025-09-20 09:49:41.629270 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.629275 | orchestrator | 2025-09-20 09:49:41.629279 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-20 09:49:41.629284 | orchestrator | Saturday 20 September 2025 09:47:44 +0000 (0:00:00.133) 0:00:13.396 **** 2025-09-20 09:49:41.629288 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629293 | orchestrator | 2025-09-20 09:49:41.629297 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-20 09:49:41.629301 | orchestrator | Saturday 20 September 2025 09:47:44 +0000 (0:00:00.231) 0:00:13.628 **** 2025-09-20 09:49:41.629306 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629310 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629315 | orchestrator | skipping: [testbed-node-5] 2025-09-20 
09:49:41.629319 | orchestrator | 2025-09-20 09:49:41.629324 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-20 09:49:41.629328 | orchestrator | Saturday 20 September 2025 09:47:45 +0000 (0:00:00.313) 0:00:13.941 **** 2025-09-20 09:49:41.629333 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629337 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629342 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.629346 | orchestrator | 2025-09-20 09:49:41.629351 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-20 09:49:41.629362 | orchestrator | Saturday 20 September 2025 09:47:45 +0000 (0:00:00.344) 0:00:14.286 **** 2025-09-20 09:49:41.629366 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629371 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629375 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.629380 | orchestrator | 2025-09-20 09:49:41.629384 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-20 09:49:41.629389 | orchestrator | Saturday 20 September 2025 09:47:46 +0000 (0:00:00.437) 0:00:14.724 **** 2025-09-20 09:49:41.629393 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629398 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629402 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.629407 | orchestrator | 2025-09-20 09:49:41.629411 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-20 09:49:41.629416 | orchestrator | Saturday 20 September 2025 09:47:46 +0000 (0:00:00.284) 0:00:15.008 **** 2025-09-20 09:49:41.629420 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629425 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629455 | orchestrator | skipping: [testbed-node-5] 2025-09-20 
09:49:41.629461 | orchestrator | 2025-09-20 09:49:41.629466 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-20 09:49:41.629470 | orchestrator | Saturday 20 September 2025 09:47:46 +0000 (0:00:00.283) 0:00:15.291 **** 2025-09-20 09:49:41.629475 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629479 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629484 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.629489 | orchestrator | 2025-09-20 09:49:41.629493 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-20 09:49:41.629513 | orchestrator | Saturday 20 September 2025 09:47:46 +0000 (0:00:00.308) 0:00:15.600 **** 2025-09-20 09:49:41.629518 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629523 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629528 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.629532 | orchestrator | 2025-09-20 09:49:41.629537 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-20 09:49:41.629541 | orchestrator | Saturday 20 September 2025 09:47:47 +0000 (0:00:00.422) 0:00:16.022 **** 2025-09-20 09:49:41.629547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cf3001a--a2bc--51f5--b2f0--80e0839adf22-osd--block--0cf3001a--a2bc--51f5--b2f0--80e0839adf22', 'dm-uuid-LVM-DnxaRx4DprVvTXzxq8pMkQFvz3WaKE38Lyl8FSyIkpr1S80xWH0OiUpXNiW0RKeS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629558 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f5012b99--8722--5cc3--9d11--b95ce6d4943a-osd--block--f5012b99--8722--5cc3--9d11--b95ce6d4943a', 'dm-uuid-LVM-07jPszdudCYLb2kASjjnJtPSDZyJdJjQhxeBPSpwXeHMqnT4tfVmcxh3U6deVX6u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629582 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6319afae--7c48--5c70--87a8--62ab4a9b6a4c-osd--block--6319afae--7c48--5c70--87a8--62ab4a9b6a4c', 'dm-uuid-LVM-c3et89XgjnYzPyeJL9a81ueXLiENcEOzlZVVYIoRqRR2d3uSdOIpiK7du2GL1b3C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--0cf3001a--a2bc--51f5--b2f0--80e0839adf22-osd--block--0cf3001a--a2bc--51f5--b2f0--80e0839adf22'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N2ncj8-uyRk-vw9F-J5nI-U1nn-KYce-KddKqt', 'scsi-0QEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9', 'scsi-SQEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--606172b3--e8d7--56e6--aaf4--86ed1800c0e9-osd--block--606172b3--e8d7--56e6--aaf4--86ed1800c0e9', 'dm-uuid-LVM-03WnYp6gxYyqDFetCQKqxkq0bm37VEwg0Vwjfen20ut1ZR6SH0cF2Nawnj8KCybv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f5012b99--8722--5cc3--9d11--b95ce6d4943a-osd--block--f5012b99--8722--5cc3--9d11--b95ce6d4943a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TJkZzW-8Taz-pYNg-NNzH-IGej-j2Gt-WcbLkR', 'scsi-0QEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650', 'scsi-SQEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b', 'scsi-SQEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-09-20 09:49:41.629723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part1', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part14', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part15', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part16', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629771 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.629777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6319afae--7c48--5c70--87a8--62ab4a9b6a4c-osd--block--6319afae--7c48--5c70--87a8--62ab4a9b6a4c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O982OA-wzSU-Y1e0-LRH0-wgZa-u0Jn-23wVP7', 'scsi-0QEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a', 'scsi-SQEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--606172b3--e8d7--56e6--aaf4--86ed1800c0e9-osd--block--606172b3--e8d7--56e6--aaf4--86ed1800c0e9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cABhdL-iJOq-eVrR-Dqgx-nq7q-XgIR-oWwkmG', 'scsi-0QEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9', 'scsi-SQEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26', 'scsi-SQEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629803 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.629808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--a0e476ce--8dbb--5cb3--b205--e96c67f25126-osd--block--a0e476ce--8dbb--5cb3--b205--e96c67f25126', 'dm-uuid-LVM-SRbLLW0bcwwOR0uc4hmvTM1QEiG0HhbjLT2nH0SZBAt0CHunNFLdADyuLankUCNB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54d5d251--b5b9--5293--b72e--54d20a6e98e4-osd--block--54d5d251--b5b9--5293--b72e--54d20a6e98e4', 'dm-uuid-LVM-wXEP2xjRPSa6cJb6tnE8v9DVUuVIBoookWjzCnwiNfdLk3lO02TOwJ410DYgvQQp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629855 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-20 09:49:41.629872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629882 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a0e476ce--8dbb--5cb3--b205--e96c67f25126-osd--block--a0e476ce--8dbb--5cb3--b205--e96c67f25126'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vDKqvI-weOD-MTIA-gDzz-9iik-tIGJ-YonfAo', 'scsi-0QEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57', 'scsi-SQEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--54d5d251--b5b9--5293--b72e--54d20a6e98e4-osd--block--54d5d251--b5b9--5293--b72e--54d20a6e98e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wrW4BB-ofv8-nPOQ-UXqq-qjVP-7pjM-mMJve7', 'scsi-0QEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5', 'scsi-SQEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d', 'scsi-SQEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-20 09:49:41.629905 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.629909 | orchestrator | 2025-09-20 09:49:41.629914 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-20 09:49:41.629919 | orchestrator | Saturday 20 September 2025 09:47:47 +0000 (0:00:00.563) 0:00:16.586 **** 2025-09-20 09:49:41.629926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0cf3001a--a2bc--51f5--b2f0--80e0839adf22-osd--block--0cf3001a--a2bc--51f5--b2f0--80e0839adf22', 'dm-uuid-LVM-DnxaRx4DprVvTXzxq8pMkQFvz3WaKE38Lyl8FSyIkpr1S80xWH0OiUpXNiW0RKeS'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629935 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f5012b99--8722--5cc3--9d11--b95ce6d4943a-osd--block--f5012b99--8722--5cc3--9d11--b95ce6d4943a', 'dm-uuid-LVM-07jPszdudCYLb2kASjjnJtPSDZyJdJjQhxeBPSpwXeHMqnT4tfVmcxh3U6deVX6u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629982 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6319afae--7c48--5c70--87a8--62ab4a9b6a4c-osd--block--6319afae--7c48--5c70--87a8--62ab4a9b6a4c', 'dm-uuid-LVM-c3et89XgjnYzPyeJL9a81ueXLiENcEOzlZVVYIoRqRR2d3uSdOIpiK7du2GL1b3C'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629986 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.629991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--606172b3--e8d7--56e6--aaf4--86ed1800c0e9-osd--block--606172b3--e8d7--56e6--aaf4--86ed1800c0e9', 'dm-uuid-LVM-03WnYp6gxYyqDFetCQKqxkq0bm37VEwg0Vwjfen20ut1ZR6SH0cF2Nawnj8KCybv'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630002 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630010 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part1', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part14', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part15', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part16', 'scsi-SQEMU_QEMU_HARDDISK_b9271ccd-95e0-4362-9036-036ce1f0e590-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--0cf3001a--a2bc--51f5--b2f0--80e0839adf22-osd--block--0cf3001a--a2bc--51f5--b2f0--80e0839adf22'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-N2ncj8-uyRk-vw9F-J5nI-U1nn-KYce-KddKqt', 'scsi-0QEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9', 'scsi-SQEMU_QEMU_HARDDISK_41170e96-3e47-41ac-ae12-e293d14045c9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f5012b99--8722--5cc3--9d11--b95ce6d4943a-osd--block--f5012b99--8722--5cc3--9d11--b95ce6d4943a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TJkZzW-8Taz-pYNg-NNzH-IGej-j2Gt-WcbLkR', 'scsi-0QEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650', 'scsi-SQEMU_QEMU_HARDDISK_fb2cb8e7-ed33-4daf-81ac-3030de87c650'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630138 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630144 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b', 'scsi-SQEMU_QEMU_HARDDISK_e93e8b04-9e7b-45a5-9708-eecfe0538f8b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630167 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 09:49:41.630175 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630186 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630195 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part1', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part14', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part15', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part16', 'scsi-SQEMU_QEMU_HARDDISK_5810f150-0213-4ab8-9336-aa67cac6df2b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630207 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6319afae--7c48--5c70--87a8--62ab4a9b6a4c-osd--block--6319afae--7c48--5c70--87a8--62ab4a9b6a4c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O982OA-wzSU-Y1e0-LRH0-wgZa-u0Jn-23wVP7', 'scsi-0QEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a', 'scsi-SQEMU_QEMU_HARDDISK_a4838d5a-524e-41b4-858a-00cf9cd1291a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a0e476ce--8dbb--5cb3--b205--e96c67f25126-osd--block--a0e476ce--8dbb--5cb3--b205--e96c67f25126', 'dm-uuid-LVM-SRbLLW0bcwwOR0uc4hmvTM1QEiG0HhbjLT2nH0SZBAt0CHunNFLdADyuLankUCNB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': 
'20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630219 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--606172b3--e8d7--56e6--aaf4--86ed1800c0e9-osd--block--606172b3--e8d7--56e6--aaf4--86ed1800c0e9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cABhdL-iJOq-eVrR-Dqgx-nq7q-XgIR-oWwkmG', 'scsi-0QEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9', 'scsi-SQEMU_QEMU_HARDDISK_e1dd809b-bff8-46fb-aa79-1858a713f2a9'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41 | INFO  | Task a450a65d-6314-44a5-aba2-b113d326f039 is in state SUCCESS 2025-09-20 09:49:41.630228 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54d5d251--b5b9--5293--b72e--54d20a6e98e4-osd--block--54d5d251--b5b9--5293--b72e--54d20a6e98e4', 'dm-uuid-LVM-wXEP2xjRPSa6cJb6tnE8v9DVUuVIBoookWjzCnwiNfdLk3lO02TOwJ410DYgvQQp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': 
'4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630280 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26', 'scsi-SQEMU_QEMU_HARDDISK_c2415bc7-a1cc-4fd3-8755-923259240f26'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630285 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630291 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 
'uuids': ['2025-09-20-08-56-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630296 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.630301 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630307 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630319 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630325 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630339 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630344 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630365 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_faf2d779-3741-4333-9a6a-67d0ebd0d2e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-20 09:49:41.630376 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a0e476ce--8dbb--5cb3--b205--e96c67f25126-osd--block--a0e476ce--8dbb--5cb3--b205--e96c67f25126'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vDKqvI-weOD-MTIA-gDzz-9iik-tIGJ-YonfAo', 'scsi-0QEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57', 'scsi-SQEMU_QEMU_HARDDISK_358b31db-4e32-4fff-a843-fcadc4546d57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630382 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--54d5d251--b5b9--5293--b72e--54d20a6e98e4-osd--block--54d5d251--b5b9--5293--b72e--54d20a6e98e4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wrW4BB-ofv8-nPOQ-UXqq-qjVP-7pjM-mMJve7', 'scsi-0QEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5', 'scsi-SQEMU_QEMU_HARDDISK_91334aab-4987-4e71-91fe-c625707f6cc5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630387 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d', 'scsi-SQEMU_QEMU_HARDDISK_a6b9e5ea-ad72-4152-982a-d01dd494947d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630401 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-20-08-56-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-20 09:49:41.630406 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.630411 | orchestrator | 2025-09-20 09:49:41.630417 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-20 09:49:41.630422 | orchestrator | Saturday 20 September 2025 09:47:48 +0000 (0:00:00.523) 0:00:17.110 **** 2025-09-20 09:49:41.630427 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.630433 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.630438 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.630443 | orchestrator | 2025-09-20 09:49:41.630449 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-20 09:49:41.630454 | orchestrator | Saturday 20 September 2025 09:47:49 +0000 (0:00:00.638) 0:00:17.748 **** 2025-09-20 09:49:41.630459 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.630464 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.630469 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.630474 | orchestrator | 2025-09-20 09:49:41.630492 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-20 09:49:41.630498 | orchestrator | Saturday 20 September 2025 09:47:49 +0000 (0:00:00.387) 0:00:18.135 **** 2025-09-20 09:49:41.630502 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.630507 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.630511 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.630516 | orchestrator | 2025-09-20 09:49:41.630520 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-20 09:49:41.630525 | orchestrator | Saturday 20 September 2025 09:47:50 +0000 (0:00:00.603) 
0:00:18.739 **** 2025-09-20 09:49:41.630529 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.630534 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.630538 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.630543 | orchestrator | 2025-09-20 09:49:41.630547 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-20 09:49:41.630552 | orchestrator | Saturday 20 September 2025 09:47:50 +0000 (0:00:00.263) 0:00:19.002 **** 2025-09-20 09:49:41.630556 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.630561 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.630565 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.630570 | orchestrator | 2025-09-20 09:49:41.630574 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-20 09:49:41.630579 | orchestrator | Saturday 20 September 2025 09:47:50 +0000 (0:00:00.362) 0:00:19.365 **** 2025-09-20 09:49:41.630583 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.630588 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.630592 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.630597 | orchestrator | 2025-09-20 09:49:41.630601 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-20 09:49:41.630609 | orchestrator | Saturday 20 September 2025 09:47:51 +0000 (0:00:00.418) 0:00:19.784 **** 2025-09-20 09:49:41.630614 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-20 09:49:41.630619 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-20 09:49:41.630623 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-20 09:49:41.630628 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-20 09:49:41.630632 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-20 09:49:41.630637 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-20 09:49:41.630641 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-20 09:49:41.630646 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-20 09:49:41.630650 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-20 09:49:41.630655 | orchestrator | 2025-09-20 09:49:41.630660 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-20 09:49:41.630672 | orchestrator | Saturday 20 September 2025 09:47:51 +0000 (0:00:00.843) 0:00:20.627 **** 2025-09-20 09:49:41.630677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-20 09:49:41.630682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-20 09:49:41.630686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-20 09:49:41.630691 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.630695 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-20 09:49:41.630699 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-20 09:49:41.630712 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-20 09:49:41.630716 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.630721 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-20 09:49:41.630733 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-20 09:49:41.630738 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-20 09:49:41.630742 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.630747 | orchestrator | 2025-09-20 09:49:41.630751 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-20 09:49:41.630756 | orchestrator | Saturday 20 September 2025 09:47:52 +0000 (0:00:00.306) 0:00:20.934 **** 2025-09-20 
09:49:41.630768 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:49:41.630773 | orchestrator | 2025-09-20 09:49:41.630777 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-20 09:49:41.630783 | orchestrator | Saturday 20 September 2025 09:47:52 +0000 (0:00:00.568) 0:00:21.502 **** 2025-09-20 09:49:41.630799 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.630804 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.630809 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.630813 | orchestrator | 2025-09-20 09:49:41.630818 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-20 09:49:41.630830 | orchestrator | Saturday 20 September 2025 09:47:53 +0000 (0:00:00.288) 0:00:21.791 **** 2025-09-20 09:49:41.630835 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.630839 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.630844 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.630848 | orchestrator | 2025-09-20 09:49:41.630853 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-20 09:49:41.630857 | orchestrator | Saturday 20 September 2025 09:47:53 +0000 (0:00:00.301) 0:00:22.093 **** 2025-09-20 09:49:41.630870 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.630875 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:49:41.630879 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:49:41.630884 | orchestrator | 2025-09-20 09:49:41.630888 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-20 09:49:41.630904 | orchestrator | Saturday 20 September 2025 09:47:53 +0000 (0:00:00.326) 0:00:22.420 **** 2025-09-20 
09:49:41.630909 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:49:41.630914 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:49:41.630918 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:49:41.630923 | orchestrator | 2025-09-20 09:49:41.630937 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-20 09:49:41.630942 | orchestrator | Saturday 20 September 2025 09:47:54 +0000 (0:00:00.608) 0:00:23.028 **** 2025-09-20 09:49:41.630947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:49:41.630951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:49:41.630956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:49:41.630968 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.630973 | orchestrator | 2025-09-20 09:49:41.630977 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-20 09:49:41.630982 | orchestrator | Saturday 20 September 2025 09:47:54 +0000 (0:00:00.380) 0:00:23.409 **** 2025-09-20 09:49:41.630986 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:49:41.630998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:49:41.631003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-20 09:49:41.631007 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:49:41.631012 | orchestrator | 2025-09-20 09:49:41.631016 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-20 09:49:41.631028 | orchestrator | Saturday 20 September 2025 09:47:55 +0000 (0:00:00.381) 0:00:23.790 **** 2025-09-20 09:49:41.631033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-20 09:49:41.631037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-20 09:49:41.631042 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-20 09:49:41.631046 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:49:41.631051 | orchestrator |
2025-09-20 09:49:41.631063 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-20 09:49:41.631068 | orchestrator | Saturday 20 September 2025 09:47:55 +0000 (0:00:00.372) 0:00:24.162 ****
2025-09-20 09:49:41.631073 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:49:41.631077 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:49:41.631081 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:49:41.631093 | orchestrator |
2025-09-20 09:49:41.631098 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-20 09:49:41.631102 | orchestrator | Saturday 20 September 2025 09:47:55 +0000 (0:00:00.337) 0:00:24.500 ****
2025-09-20 09:49:41.631107 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-20 09:49:41.631111 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-20 09:49:41.631137 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-20 09:49:41.631142 | orchestrator |
2025-09-20 09:49:41.631147 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-20 09:49:41.631151 | orchestrator | Saturday 20 September 2025 09:47:56 +0000 (0:00:00.569) 0:00:25.069 ****
2025-09-20 09:49:41.631156 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-20 09:49:41.631160 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-20 09:49:41.631165 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-20 09:49:41.631170 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:49:41.631182 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-20 09:49:41.631186 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-20 09:49:41.631191 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-20 09:49:41.631199 | orchestrator |
2025-09-20 09:49:41.631203 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-20 09:49:41.631208 | orchestrator | Saturday 20 September 2025 09:47:57 +0000 (0:00:01.010) 0:00:26.079 ****
2025-09-20 09:49:41.631219 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-20 09:49:41.631224 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-20 09:49:41.631229 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-20 09:49:41.631233 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-20 09:49:41.631283 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-20 09:49:41.631288 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-20 09:49:41.631296 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-20 09:49:41.631300 | orchestrator |
2025-09-20 09:49:41.631305 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-20 09:49:41.631309 | orchestrator | Saturday 20 September 2025 09:47:59 +0000 (0:00:00.344) 0:00:28.105 ****
2025-09-20 09:49:41.631314 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:49:41.631318 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:49:41.631323 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-20 09:49:41.631327 | orchestrator |
2025-09-20 09:49:41.631332 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-20 09:49:41.631337 | orchestrator | Saturday 20 September 2025 09:47:59 +0000 (0:00:00.344) 0:00:28.449 ****
2025-09-20 09:49:41.631342 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-20 09:49:41.631350 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-20 09:49:41.631355 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-20 09:49:41.631360 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-20 09:49:41.631365 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-20 09:49:41.631369 | orchestrator |
2025-09-20 09:49:41.631374 | orchestrator | TASK [generate keys] ***********************************************************
2025-09-20 09:49:41.631379 | orchestrator | Saturday 20 September 2025 09:48:45 +0000 (0:00:46.000) 0:01:14.450 ****
2025-09-20 09:49:41.631383 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631388 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631392 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631405 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631410 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631414 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-09-20 09:49:41.631419 | orchestrator |
2025-09-20 09:49:41.631423 | orchestrator | TASK [get keys from monitors] **************************************************
2025-09-20 09:49:41.631428 | orchestrator | Saturday 20 September 2025 09:49:10 +0000 (0:00:24.500) 0:01:38.950 ****
2025-09-20 09:49:41.631433 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631437 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631442 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631446 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631451 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631455 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631460 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-20 09:49:41.631464 | orchestrator |
2025-09-20 09:49:41.631469 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-09-20 09:49:41.631473 | orchestrator | Saturday 20 September 2025 09:49:22 +0000 (0:00:12.365) 0:01:51.316 ****
2025-09-20 09:49:41.631478 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631483 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-20 09:49:41.631487 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-20 09:49:41.631492 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631499 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-20 09:49:41.631504 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-20 09:49:41.631509 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631513 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-20 09:49:41.631518 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-20 09:49:41.631522 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631527 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-20 09:49:41.631531 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-20 09:49:41.631536 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631540 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-20 09:49:41.631545 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-20 09:49:41.631550 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-20 09:49:41.631557 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-20 09:49:41.631562 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-20 09:49:41.631567 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-09-20 09:49:41.631571 | orchestrator |
2025-09-20 09:49:41.631576 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:49:41.631581 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-09-20 09:49:41.631587 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-20 09:49:41.631595 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-20 09:49:41.631600 | orchestrator |
2025-09-20 09:49:41.631604 | orchestrator |
2025-09-20 09:49:41.631609 | orchestrator |
2025-09-20 09:49:41.631613 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:49:41.631618 | orchestrator | Saturday 20 September 2025 09:49:40 +0000 (0:00:17.979) 0:02:09.295 ****
2025-09-20 09:49:41.631622 | orchestrator | ===============================================================================
2025-09-20 09:49:41.631627 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.00s
2025-09-20 09:49:41.631631 | orchestrator | generate keys ---------------------------------------------------------- 24.50s
2025-09-20 09:49:41.631636 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.98s
2025-09-20 09:49:41.631641 | orchestrator | get keys from monitors ------------------------------------------------- 12.37s
2025-09-20 09:49:41.631645 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.20s
2025-09-20 09:49:41.631650 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.03s
2025-09-20 09:49:41.631654 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.70s
2025-09-20 09:49:41.631659 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.01s
2025-09-20 09:49:41.631664 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s
2025-09-20 09:49:41.631668 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s
2025-09-20 09:49:41.631673 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s
2025-09-20 09:49:41.631677 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s
2025-09-20 09:49:41.631682 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s
2025-09-20 09:49:41.631686 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s
2025-09-20 09:49:41.631691 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.63s
2025-09-20 09:49:41.631695 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.61s
2025-09-20 09:49:41.631700 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.60s
2025-09-20 09:49:41.631704 | orchestrator | ceph-facts : Set_fact discovered_interpreter_python if not previously set --- 0.59s
2025-09-20 09:49:41.631709 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.57s
2025-09-20
09:49:41.631713 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.57s
2025-09-20 09:49:41.631718 | orchestrator | 2025-09-20 09:49:41 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED
2025-09-20 09:49:41.631723 | orchestrator | 2025-09-20 09:49:41 | INFO  | Task 4dd97186-5569-4b43-a8a0-a36d837cfb3d is in state STARTED
2025-09-20 09:49:41.631727 | orchestrator | 2025-09-20 09:49:41 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:49:44.691983 | orchestrator | 2025-09-20 09:49:44 | INFO  | Task fa0a7a45-5f76-4180-9391-be6872260245 is in state STARTED
2025-09-20 09:49:44.693726 | orchestrator | 2025-09-20 09:49:44 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED
2025-09-20 09:49:44.696999 | orchestrator | 2025-09-20 09:49:44 | INFO  | Task 4dd97186-5569-4b43-a8a0-a36d837cfb3d is in state STARTED
2025-09-20 09:49:44.697631 | orchestrator | 2025-09-20 09:49:44 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:50:12.165903 | orchestrator | 2025-09-20 09:50:12 | INFO  | Task fa0a7a45-5f76-4180-9391-be6872260245 is in state SUCCESS
2025-09-20 09:50:12.167106 | orchestrator | 2025-09-20 09:50:12 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED
2025-09-20 09:50:12.170295 | orchestrator | 2025-09-20 09:50:12 | INFO  | Task 4dd97186-5569-4b43-a8a0-a36d837cfb3d is in state STARTED
2025-09-20 09:50:12.173234 | orchestrator | 2025-09-20 09:50:12 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED
2025-09-20 09:50:12.173581 | orchestrator | 2025-09-20 09:50:12 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:50:24.347560 | orchestrator | 2025-09-20 09:50:24 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED
2025-09-20 09:50:24.350280 | orchestrator | 2025-09-20 09:50:24 | INFO  | Task 4dd97186-5569-4b43-a8a0-a36d837cfb3d is in state SUCCESS
2025-09-20 09:50:24.351849 | orchestrator |
2025-09-20 09:50:24.351919 | orchestrator |
2025-09-20 09:50:24.352288 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-09-20 09:50:24.352304 | orchestrator |
2025-09-20 09:50:24.352316 |
orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-09-20 09:50:24.352328 | orchestrator | Saturday 20 September 2025 09:49:45 +0000 (0:00:00.164) 0:00:00.164 ****
2025-09-20 09:50:24.352339 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-09-20 09:50:24.352352 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-20 09:50:24.352363 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-20 09:50:24.352374 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-09-20 09:50:24.352385 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-20 09:50:24.352429 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-09-20 09:50:24.352449 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-09-20 09:50:24.352467 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-09-20 09:50:24.352486 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-09-20 09:50:24.352505 | orchestrator |
2025-09-20 09:50:24.352524 | orchestrator | TASK [Create share directory] **************************************************
2025-09-20 09:50:24.352544 | orchestrator | Saturday 20 September 2025 09:49:49 +0000 (0:00:04.177) 0:00:04.342 ****
2025-09-20 09:50:24.352563 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-20 09:50:24.352582 | orchestrator |
2025-09-20 09:50:24.352594 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-09-20 09:50:24.352605 | orchestrator | Saturday 20 September 2025 09:49:50 +0000 (0:00:00.972) 0:00:05.315 ****
2025-09-20 09:50:24.352616 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-09-20 09:50:24.352627 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-20 09:50:24.352638 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-20 09:50:24.352649 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-09-20 09:50:24.352660 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-20 09:50:24.352670 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-09-20 09:50:24.352681 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-09-20 09:50:24.352691 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-09-20 09:50:24.352702 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-09-20 09:50:24.352712 | orchestrator |
2025-09-20 09:50:24.352723 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-09-20 09:50:24.352734 | orchestrator | Saturday 20 September 2025 09:50:03 +0000 (0:00:13.374) 0:00:18.689 ****
2025-09-20 09:50:24.352745 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-09-20 09:50:24.352755 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-20 09:50:24.352766 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-20 09:50:24.352816 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-09-20 09:50:24.352828 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-20 09:50:24.352839 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-09-20 09:50:24.352850 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-09-20 09:50:24.352861 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-09-20 09:50:24.352887 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-09-20 09:50:24.352901 | orchestrator |
2025-09-20 09:50:24.352913 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:50:24.352926 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:50:24.352941 | orchestrator |
2025-09-20 09:50:24.352953 | orchestrator |
2025-09-20 09:50:24.352965 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:50:24.352978 | orchestrator | Saturday 20 September 2025 09:50:10 +0000 (0:00:06.812) 0:00:25.502 ****
2025-09-20 09:50:24.352989 | orchestrator | ===============================================================================
2025-09-20 09:50:24.353014 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.37s
2025-09-20 09:50:24.353026 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.81s
2025-09-20 09:50:24.353038 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.18s
2025-09-20 09:50:24.353051 | orchestrator | Create share directory -------------------------------------------------- 0.97s
2025-09-20 09:50:24.353063 | orchestrator |
2025-09-20 09:50:24.353075 | orchestrator |
2025-09-20 09:50:24.353088 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:50:24.353100 | orchestrator |
2025-09-20 09:50:24.353124
| orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 09:50:24.353168 | orchestrator | Saturday 20 September 2025 09:48:38 +0000 (0:00:00.271) 0:00:00.271 ****
2025-09-20 09:50:24.353181 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:50:24.353194 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:50:24.353206 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:50:24.353218 | orchestrator |
2025-09-20 09:50:24.353231 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 09:50:24.353242 | orchestrator | Saturday 20 September 2025 09:48:38 +0000 (0:00:00.304) 0:00:00.576 ****
2025-09-20 09:50:24.353253 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-09-20 09:50:24.353264 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-09-20 09:50:24.353274 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-09-20 09:50:24.353285 | orchestrator |
2025-09-20 09:50:24.353296 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-09-20 09:50:24.353307 | orchestrator |
2025-09-20 09:50:24.353317 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-20 09:50:24.353328 | orchestrator | Saturday 20 September 2025 09:48:39 +0000 (0:00:00.436) 0:00:01.012 ****
2025-09-20 09:50:24.353339 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:50:24.353350 | orchestrator |
2025-09-20 09:50:24.353360 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-09-20 09:50:24.353371 | orchestrator | Saturday 20 September 2025 09:48:39 +0000 (0:00:00.531) 0:00:01.543 ****
2025-09-20 09:50:24.353394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:50:24.353434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:50:24.353455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-20 09:50:24.353475 | orchestrator |
2025-09-20 09:50:24.353486 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-09-20 09:50:24.353497 | orchestrator | Saturday 20 September 2025 09:48:40 +0000 (0:00:01.113) 0:00:02.657 ****
2025-09-20 09:50:24.353508 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:50:24.353519 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:50:24.353529 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:50:24.353540 | orchestrator |
2025-09-20 09:50:24.353551 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-20 09:50:24.353561 | orchestrator | Saturday 20 September 2025 09:48:41 +0000 (0:00:00.491) 0:00:03.148 ****
2025-09-20 09:50:24.353572 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-20 09:50:24.353583 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-20 09:50:24.353600 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-09-20 09:50:24.353611 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-09-20 09:50:24.353622 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-09-20 09:50:24.353633 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-09-20 09:50:24.353643 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-09-20 09:50:24.353654 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-09-20 09:50:24.353665 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-20 09:50:24.353675 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-20 09:50:24.353686 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-09-20 09:50:24.353697 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-09-20 09:50:24.353707 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-09-20 09:50:24.353718 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-09-20 09:50:24.353729 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-09-20 09:50:24.353740 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-09-20 09:50:24.353750 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-09-20 09:50:24.353761 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-09-20 09:50:24.353772 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-09-20 09:50:24.353782 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-09-20 09:50:24.353793 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-09-20 09:50:24.353804 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-09-20 09:50:24.353814 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-09-20 09:50:24.353825 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-09-20 09:50:24.353843 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-09-20 09:50:24.353855 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-09-20 09:50:24.353866 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-09-20 09:50:24.353877 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-09-20 09:50:24.353888 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-09-20 09:50:24.353899 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-09-20 09:50:24.353910 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-09-20 09:50:24.353925 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-09-20 09:50:24.353936 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-09-20 09:50:24.353947 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-20 09:50:24.353958 | orchestrator | 2025-09-20 09:50:24.353969 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.353980 | orchestrator | Saturday 20 September 2025 09:48:42 +0000 (0:00:00.726) 0:00:03.875 **** 2025-09-20 09:50:24.353991 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.354001 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.354012 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.354068 | orchestrator | 2025-09-20 09:50:24.354079 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.354090 | orchestrator | Saturday 20 September 2025 09:48:42 +0000 (0:00:00.302) 0:00:04.177 **** 2025-09-20 09:50:24.354101 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354112 | orchestrator | 2025-09-20 09:50:24.354123 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.354158 | orchestrator | Saturday 20 September 2025 09:48:42 +0000 (0:00:00.123) 0:00:04.301 **** 2025-09-20 09:50:24.354170 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354181 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.354192 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.354203 | orchestrator | 2025-09-20 09:50:24.354214 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.354224 | orchestrator | Saturday 20 September 2025 09:48:43 +0000 (0:00:00.453) 0:00:04.755 **** 2025-09-20 09:50:24.354235 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.354246 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.354256 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.354267 | orchestrator | 2025-09-20 09:50:24.354278 | orchestrator | 
TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.354288 | orchestrator | Saturday 20 September 2025 09:48:43 +0000 (0:00:00.318) 0:00:05.073 **** 2025-09-20 09:50:24.354299 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354310 | orchestrator | 2025-09-20 09:50:24.354320 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.354339 | orchestrator | Saturday 20 September 2025 09:48:43 +0000 (0:00:00.155) 0:00:05.228 **** 2025-09-20 09:50:24.354349 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354360 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.354371 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.354382 | orchestrator | 2025-09-20 09:50:24.354392 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.354403 | orchestrator | Saturday 20 September 2025 09:48:43 +0000 (0:00:00.290) 0:00:05.519 **** 2025-09-20 09:50:24.354414 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.354425 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.354435 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.354446 | orchestrator | 2025-09-20 09:50:24.354457 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.354468 | orchestrator | Saturday 20 September 2025 09:48:44 +0000 (0:00:00.304) 0:00:05.824 **** 2025-09-20 09:50:24.354478 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354489 | orchestrator | 2025-09-20 09:50:24.354500 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.354510 | orchestrator | Saturday 20 September 2025 09:48:44 +0000 (0:00:00.121) 0:00:05.946 **** 2025-09-20 09:50:24.354521 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354532 | 
orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.354542 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.354553 | orchestrator | 2025-09-20 09:50:24.354564 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.354575 | orchestrator | Saturday 20 September 2025 09:48:44 +0000 (0:00:00.531) 0:00:06.477 **** 2025-09-20 09:50:24.354585 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.354596 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.354607 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.354617 | orchestrator | 2025-09-20 09:50:24.354628 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.354639 | orchestrator | Saturday 20 September 2025 09:48:45 +0000 (0:00:00.305) 0:00:06.783 **** 2025-09-20 09:50:24.354650 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354660 | orchestrator | 2025-09-20 09:50:24.354671 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.354682 | orchestrator | Saturday 20 September 2025 09:48:45 +0000 (0:00:00.145) 0:00:06.929 **** 2025-09-20 09:50:24.354692 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354703 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.354714 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.354724 | orchestrator | 2025-09-20 09:50:24.354735 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.354746 | orchestrator | Saturday 20 September 2025 09:48:45 +0000 (0:00:00.292) 0:00:07.221 **** 2025-09-20 09:50:24.354757 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.354768 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.354778 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.354789 | orchestrator | 2025-09-20 
09:50:24.354800 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.354810 | orchestrator | Saturday 20 September 2025 09:48:45 +0000 (0:00:00.327) 0:00:07.549 **** 2025-09-20 09:50:24.354821 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354832 | orchestrator | 2025-09-20 09:50:24.354843 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.354853 | orchestrator | Saturday 20 September 2025 09:48:46 +0000 (0:00:00.381) 0:00:07.930 **** 2025-09-20 09:50:24.354864 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.354875 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.354885 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.354896 | orchestrator | 2025-09-20 09:50:24.354912 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.354923 | orchestrator | Saturday 20 September 2025 09:48:46 +0000 (0:00:00.309) 0:00:08.240 **** 2025-09-20 09:50:24.354940 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.354951 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.354962 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.354973 | orchestrator | 2025-09-20 09:50:24.354984 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.354995 | orchestrator | Saturday 20 September 2025 09:48:46 +0000 (0:00:00.331) 0:00:08.572 **** 2025-09-20 09:50:24.355005 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.355016 | orchestrator | 2025-09-20 09:50:24.355027 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.355038 | orchestrator | Saturday 20 September 2025 09:48:47 +0000 (0:00:00.127) 0:00:08.699 **** 2025-09-20 09:50:24.355048 | orchestrator | skipping: [testbed-node-0] 
2025-09-20 09:50:24.355059 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.355070 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.355080 | orchestrator | 2025-09-20 09:50:24.355091 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.355102 | orchestrator | Saturday 20 September 2025 09:48:47 +0000 (0:00:00.303) 0:00:09.002 **** 2025-09-20 09:50:24.355112 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.355123 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.355182 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.355194 | orchestrator | 2025-09-20 09:50:24.355211 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.355223 | orchestrator | Saturday 20 September 2025 09:48:47 +0000 (0:00:00.509) 0:00:09.512 **** 2025-09-20 09:50:24.355234 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.355244 | orchestrator | 2025-09-20 09:50:24.355255 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.355266 | orchestrator | Saturday 20 September 2025 09:48:47 +0000 (0:00:00.134) 0:00:09.647 **** 2025-09-20 09:50:24.355277 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.355287 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.355298 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.355309 | orchestrator | 2025-09-20 09:50:24.355320 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.355330 | orchestrator | Saturday 20 September 2025 09:48:48 +0000 (0:00:00.297) 0:00:09.944 **** 2025-09-20 09:50:24.355341 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.355352 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.355362 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.355373 | 
orchestrator | 2025-09-20 09:50:24.355384 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.355395 | orchestrator | Saturday 20 September 2025 09:48:48 +0000 (0:00:00.311) 0:00:10.255 **** 2025-09-20 09:50:24.355404 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.355414 | orchestrator | 2025-09-20 09:50:24.355424 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.355433 | orchestrator | Saturday 20 September 2025 09:48:48 +0000 (0:00:00.130) 0:00:10.385 **** 2025-09-20 09:50:24.355443 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.355452 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.355462 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.355471 | orchestrator | 2025-09-20 09:50:24.355481 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.355490 | orchestrator | Saturday 20 September 2025 09:48:49 +0000 (0:00:00.300) 0:00:10.686 **** 2025-09-20 09:50:24.355500 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.355509 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.355519 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.355528 | orchestrator | 2025-09-20 09:50:24.355538 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.355548 | orchestrator | Saturday 20 September 2025 09:48:49 +0000 (0:00:00.543) 0:00:11.230 **** 2025-09-20 09:50:24.355563 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.355573 | orchestrator | 2025-09-20 09:50:24.355582 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.355592 | orchestrator | Saturday 20 September 2025 09:48:49 +0000 (0:00:00.140) 0:00:11.370 **** 2025-09-20 09:50:24.355601 | orchestrator | 
skipping: [testbed-node-0] 2025-09-20 09:50:24.355611 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.355621 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.355630 | orchestrator | 2025-09-20 09:50:24.355639 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-20 09:50:24.355649 | orchestrator | Saturday 20 September 2025 09:48:50 +0000 (0:00:00.317) 0:00:11.688 **** 2025-09-20 09:50:24.355658 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:50:24.355668 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:50:24.355678 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:50:24.355687 | orchestrator | 2025-09-20 09:50:24.355697 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-20 09:50:24.355706 | orchestrator | Saturday 20 September 2025 09:48:50 +0000 (0:00:00.370) 0:00:12.059 **** 2025-09-20 09:50:24.355716 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.355725 | orchestrator | 2025-09-20 09:50:24.355735 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-20 09:50:24.355744 | orchestrator | Saturday 20 September 2025 09:48:50 +0000 (0:00:00.127) 0:00:12.186 **** 2025-09-20 09:50:24.355754 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.355763 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.355772 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.355782 | orchestrator | 2025-09-20 09:50:24.355792 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-20 09:50:24.355801 | orchestrator | Saturday 20 September 2025 09:48:50 +0000 (0:00:00.488) 0:00:12.675 **** 2025-09-20 09:50:24.355811 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:50:24.355820 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:50:24.355830 | orchestrator | changed: 
[testbed-node-2] 2025-09-20 09:50:24.355839 | orchestrator | 2025-09-20 09:50:24.355849 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-20 09:50:24.355863 | orchestrator | Saturday 20 September 2025 09:48:52 +0000 (0:00:01.633) 0:00:14.309 **** 2025-09-20 09:50:24.355873 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-20 09:50:24.355883 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-20 09:50:24.355892 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-20 09:50:24.355902 | orchestrator | 2025-09-20 09:50:24.355911 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-20 09:50:24.355921 | orchestrator | Saturday 20 September 2025 09:48:54 +0000 (0:00:01.842) 0:00:16.151 **** 2025-09-20 09:50:24.355930 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-20 09:50:24.355940 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-20 09:50:24.355950 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-20 09:50:24.355959 | orchestrator | 2025-09-20 09:50:24.355969 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-20 09:50:24.355978 | orchestrator | Saturday 20 September 2025 09:48:56 +0000 (0:00:02.127) 0:00:18.278 **** 2025-09-20 09:50:24.355992 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-20 09:50:24.356002 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-20 09:50:24.356012 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-20 09:50:24.356033 | orchestrator | 2025-09-20 09:50:24.356043 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-20 09:50:24.356053 | orchestrator | Saturday 20 September 2025 09:48:58 +0000 (0:00:02.051) 0:00:20.329 **** 2025-09-20 09:50:24.356062 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.356072 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.356081 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.356091 | orchestrator | 2025-09-20 09:50:24.356100 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-20 09:50:24.356110 | orchestrator | Saturday 20 September 2025 09:48:58 +0000 (0:00:00.331) 0:00:20.661 **** 2025-09-20 09:50:24.356119 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.356129 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.356154 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.356163 | orchestrator | 2025-09-20 09:50:24.356173 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-20 09:50:24.356182 | orchestrator | Saturday 20 September 2025 09:48:59 +0000 (0:00:00.298) 0:00:20.959 **** 2025-09-20 09:50:24.356192 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:50:24.356201 | orchestrator | 2025-09-20 09:50:24.356211 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-20 09:50:24.356220 | orchestrator | Saturday 20 September 2025 09:48:59 +0000 (0:00:00.576) 0:00:21.536 **** 2025-09-20 09:50:24.356237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-09-20 09:50:24.356257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:50:24.356284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:50:24.356295 | orchestrator | 2025-09-20 09:50:24.356305 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-20 09:50:24.356321 | orchestrator | Saturday 20 September 2025 09:49:01 +0000 (0:00:01.846) 0:00:23.382 **** 2025-09-20 09:50:24.356340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 09:50:24.356352 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.356368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 09:50:24.356390 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.356401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 09:50:24.356417 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.356434 | orchestrator | 2025-09-20 09:50:24.356451 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-20 09:50:24.356468 | orchestrator | Saturday 20 September 2025 09:49:02 +0000 (0:00:00.653) 0:00:24.035 **** 2025-09-20 09:50:24.356501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}})  2025-09-20 09:50:24.356529 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.356547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 09:50:24.356564 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.356599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-20 09:50:24.356635 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:50:24.356656 | orchestrator | 2025-09-20 09:50:24.356673 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-20 09:50:24.356691 | orchestrator | Saturday 20 September 2025 09:49:03 +0000 (0:00:00.830) 0:00:24.865 **** 2025-09-20 09:50:24.356709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:50:24.356754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:50:24.356773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-20 09:50:24.356790 | orchestrator | 2025-09-20 09:50:24.356800 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-20 09:50:24.356809 | orchestrator | Saturday 20 September 2025 09:49:04 +0000 (0:00:01.636) 0:00:26.502 **** 2025-09-20 09:50:24.356819 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:50:24.356828 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:50:24.356838 | orchestrator | 
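The `frontend_http_extra` and `frontend_redirect_extra` entries repeated in the items above all install the same ACME HTTP-01 routing rule. Rendered into `haproxy.cfg`, the effect looks roughly like the following sketch (section and backend names here are illustrative, not the exact kolla-ansible template output):

```
frontend horizon_external_front
    bind *:80
    # Let's Encrypt HTTP-01 challenges are diverted to the acme client
    # before any redirect-to-HTTPS rule can fire
    use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }
    default_backend horizon_back

backend horizon_back
    # from backend_http_extra in the service definition
    balance roundrobin
```

This is why both the plain `horizon` service and the `horizon_redirect`/`horizon_external_redirect` entries carry the rule: the challenge path must bypass the HTTP-to-HTTPS redirect on port 80, since the ACME server validates over plain HTTP.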
skipping: [testbed-node-2] 2025-09-20 09:50:24.356847 | orchestrator | 2025-09-20 09:50:24.356857 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-20 09:50:24.356866 | orchestrator | Saturday 20 September 2025 09:49:05 +0000 (0:00:00.320) 0:00:26.822 **** 2025-09-20 09:50:24.356876 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:50:24.356885 | orchestrator | 2025-09-20 09:50:24.356894 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-20 09:50:24.356904 | orchestrator | Saturday 20 September 2025 09:49:05 +0000 (0:00:00.532) 0:00:27.354 **** 2025-09-20 09:50:24.356914 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:50:24.356923 | orchestrator | 2025-09-20 09:50:24.356938 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-20 09:50:24.356948 | orchestrator | Saturday 20 September 2025 09:49:07 +0000 (0:00:02.165) 0:00:29.520 **** 2025-09-20 09:50:24.356958 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:50:24.356967 | orchestrator | 2025-09-20 09:50:24.356977 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-20 09:50:24.356986 | orchestrator | Saturday 20 September 2025 09:49:10 +0000 (0:00:02.678) 0:00:32.199 **** 2025-09-20 09:50:24.356996 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:50:24.357005 | orchestrator | 2025-09-20 09:50:24.357015 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-20 09:50:24.357024 | orchestrator | Saturday 20 September 2025 09:49:26 +0000 (0:00:15.556) 0:00:47.756 **** 2025-09-20 09:50:24.357034 | orchestrator | 2025-09-20 09:50:24.357043 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-20 
09:50:24.357053 | orchestrator | Saturday 20 September 2025 09:49:26 +0000 (0:00:00.067) 0:00:47.824 **** 2025-09-20 09:50:24.357062 | orchestrator | 2025-09-20 09:50:24.357072 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-20 09:50:24.357081 | orchestrator | Saturday 20 September 2025 09:49:26 +0000 (0:00:00.062) 0:00:47.886 **** 2025-09-20 09:50:24.357090 | orchestrator | 2025-09-20 09:50:24.357100 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-20 09:50:24.357109 | orchestrator | Saturday 20 September 2025 09:49:26 +0000 (0:00:00.091) 0:00:47.978 **** 2025-09-20 09:50:24.357119 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:50:24.357128 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:50:24.357200 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:50:24.357210 | orchestrator | 2025-09-20 09:50:24.357220 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:50:24.357230 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-20 09:50:24.357240 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-20 09:50:24.357250 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-20 09:50:24.357259 | orchestrator | 2025-09-20 09:50:24.357269 | orchestrator | 2025-09-20 09:50:24.357278 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:50:24.357299 | orchestrator | Saturday 20 September 2025 09:50:21 +0000 (0:00:55.548) 0:01:43.527 **** 2025-09-20 09:50:24.357308 | orchestrator | =============================================================================== 2025-09-20 09:50:24.357318 | orchestrator | horizon : Restart horizon container 
------------------------------------ 55.55s 2025-09-20 09:50:24.357327 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.56s 2025-09-20 09:50:24.357337 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.68s 2025-09-20 09:50:24.357346 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.17s 2025-09-20 09:50:24.357356 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.13s 2025-09-20 09:50:24.357365 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.05s 2025-09-20 09:50:24.357375 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.85s 2025-09-20 09:50:24.357384 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.84s 2025-09-20 09:50:24.357394 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.64s 2025-09-20 09:50:24.357403 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.63s 2025-09-20 09:50:24.357412 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.11s 2025-09-20 09:50:24.357422 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s 2025-09-20 09:50:24.357431 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s 2025-09-20 09:50:24.357440 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2025-09-20 09:50:24.357452 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s 2025-09-20 09:50:24.357460 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s 2025-09-20 09:50:24.357468 | orchestrator | horizon : include_tasks 
------------------------------------------------- 0.53s 2025-09-20 09:50:24.357475 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2025-09-20 09:50:24.357483 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2025-09-20 09:50:24.357491 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2025-09-20 09:50:24.357498 | orchestrator | 2025-09-20 09:50:24 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:24.357506 | orchestrator | 2025-09-20 09:50:24 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:27.397722 | orchestrator | 2025-09-20 09:50:27 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:27.398785 | orchestrator | 2025-09-20 09:50:27 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:27.398814 | orchestrator | 2025-09-20 09:50:27 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:30.445579 | orchestrator | 2025-09-20 09:50:30 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:30.446273 | orchestrator | 2025-09-20 09:50:30 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:30.448108 | orchestrator | 2025-09-20 09:50:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:33.492830 | orchestrator | 2025-09-20 09:50:33 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:33.496997 | orchestrator | 2025-09-20 09:50:33 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:33.498226 | orchestrator | 2025-09-20 09:50:33 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:36.541759 | orchestrator | 2025-09-20 09:50:36 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:36.543678 | 
orchestrator | 2025-09-20 09:50:36 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:36.543712 | orchestrator | 2025-09-20 09:50:36 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:39.587090 | orchestrator | 2025-09-20 09:50:39 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:39.588911 | orchestrator | 2025-09-20 09:50:39 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:39.588947 | orchestrator | 2025-09-20 09:50:39 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:42.629176 | orchestrator | 2025-09-20 09:50:42 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:42.630786 | orchestrator | 2025-09-20 09:50:42 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:42.630848 | orchestrator | 2025-09-20 09:50:42 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:45.672081 | orchestrator | 2025-09-20 09:50:45 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:45.676035 | orchestrator | 2025-09-20 09:50:45 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:45.676067 | orchestrator | 2025-09-20 09:50:45 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:48.717410 | orchestrator | 2025-09-20 09:50:48 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:48.718999 | orchestrator | 2025-09-20 09:50:48 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:48.719028 | orchestrator | 2025-09-20 09:50:48 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:51.754087 | orchestrator | 2025-09-20 09:50:51 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:51.755678 | orchestrator | 2025-09-20 09:50:51 | INFO  | Task 
10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:51.755710 | orchestrator | 2025-09-20 09:50:51 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:54.799801 | orchestrator | 2025-09-20 09:50:54 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:54.801089 | orchestrator | 2025-09-20 09:50:54 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:54.801119 | orchestrator | 2025-09-20 09:50:54 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:50:57.844681 | orchestrator | 2025-09-20 09:50:57 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:50:57.846660 | orchestrator | 2025-09-20 09:50:57 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:50:57.847229 | orchestrator | 2025-09-20 09:50:57 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:00.890731 | orchestrator | 2025-09-20 09:51:00 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:51:00.893420 | orchestrator | 2025-09-20 09:51:00 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:51:00.893478 | orchestrator | 2025-09-20 09:51:00 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:03.944821 | orchestrator | 2025-09-20 09:51:03 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:51:03.947732 | orchestrator | 2025-09-20 09:51:03 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 09:51:03.947793 | orchestrator | 2025-09-20 09:51:03 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:06.999921 | orchestrator | 2025-09-20 09:51:06 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:51:07.000548 | orchestrator | 2025-09-20 09:51:06 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state STARTED 2025-09-20 
09:51:07.000578 | orchestrator | 2025-09-20 09:51:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:10.037343 | orchestrator | 2025-09-20 09:51:10 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:51:10.037422 | orchestrator | 2025-09-20 09:51:10 | INFO  | Task 86202307-6f37-4cbb-900a-a06dcffeb565 is in state STARTED 2025-09-20 09:51:10.038930 | orchestrator | 2025-09-20 09:51:10 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED 2025-09-20 09:51:10.038949 | orchestrator | 2025-09-20 09:51:10 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:51:10.041668 | orchestrator | 2025-09-20 09:51:10 | INFO  | Task 10f2cda2-8eaa-4439-9bdb-a466f4535da6 is in state SUCCESS 2025-09-20 09:51:10.041686 | orchestrator | 2025-09-20 09:51:10 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:13.073103 | orchestrator | 2025-09-20 09:51:13 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:51:13.076739 | orchestrator | 2025-09-20 09:51:13 | INFO  | Task 86202307-6f37-4cbb-900a-a06dcffeb565 is in state STARTED 2025-09-20 09:51:13.080480 | orchestrator | 2025-09-20 09:51:13 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED 2025-09-20 09:51:13.081191 | orchestrator | 2025-09-20 09:51:13 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:51:13.081370 | orchestrator | 2025-09-20 09:51:13 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:16.115362 | orchestrator | 2025-09-20 09:51:16 | INFO  | Task ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED 2025-09-20 09:51:16.117309 | orchestrator | 2025-09-20 09:51:16 | INFO  | Task af594f58-e1a9-42c9-bd7f-2bc7fb3f9693 is in state STARTED 2025-09-20 09:51:16.117331 | orchestrator | 2025-09-20 09:51:16 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state STARTED 2025-09-20 09:51:16.117869 | orchestrator 
| 2025-09-20 09:51:16 | INFO  | Task 86202307-6f37-4cbb-900a-a06dcffeb565 is in state SUCCESS 2025-09-20 09:51:16.118790 | orchestrator | 2025-09-20 09:51:16 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED 2025-09-20 09:51:16.119609 | orchestrator | 2025-09-20 09:51:16 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:51:16.119686 | orchestrator | 2025-09-20 09:51:16 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:19.162327 | orchestrator | 2025-09-20 09:51:19 | INFO  | Task ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED 2025-09-20 09:51:19.162850 | orchestrator | 2025-09-20 09:51:19 | INFO  | Task af594f58-e1a9-42c9-bd7f-2bc7fb3f9693 is in state STARTED 2025-09-20 09:51:19.166218 | orchestrator | 2025-09-20 09:51:19 | INFO  | Task 975eadac-f85c-4900-bb8d-2a262ffc959c is in state SUCCESS 2025-09-20 09:51:19.167372 | orchestrator | 2025-09-20 09:51:19.167429 | orchestrator | 2025-09-20 09:51:19.167442 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-20 09:51:19.167453 | orchestrator | 2025-09-20 09:51:19.167865 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-20 09:51:19.167879 | orchestrator | Saturday 20 September 2025 09:50:14 +0000 (0:00:00.240) 0:00:00.240 **** 2025-09-20 09:51:19.167891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-20 09:51:19.167930 | orchestrator | 2025-09-20 09:51:19.167956 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-20 09:51:19.167967 | orchestrator | Saturday 20 September 2025 09:50:15 +0000 (0:00:00.256) 0:00:00.497 **** 2025-09-20 09:51:19.167979 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-20 09:51:19.167990 | 
orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-20 09:51:19.168001 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-20 09:51:19.168012 | orchestrator | 2025-09-20 09:51:19.168023 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-20 09:51:19.168034 | orchestrator | Saturday 20 September 2025 09:50:16 +0000 (0:00:01.364) 0:00:01.861 **** 2025-09-20 09:51:19.168045 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-20 09:51:19.168056 | orchestrator | 2025-09-20 09:51:19.168066 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-20 09:51:19.168077 | orchestrator | Saturday 20 September 2025 09:50:17 +0000 (0:00:01.146) 0:00:03.008 **** 2025-09-20 09:51:19.168088 | orchestrator | changed: [testbed-manager] 2025-09-20 09:51:19.168099 | orchestrator | 2025-09-20 09:51:19.168110 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-20 09:51:19.168121 | orchestrator | Saturday 20 September 2025 09:50:18 +0000 (0:00:01.011) 0:00:04.019 **** 2025-09-20 09:51:19.168132 | orchestrator | changed: [testbed-manager] 2025-09-20 09:51:19.168143 | orchestrator | 2025-09-20 09:51:19.168184 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-20 09:51:19.168195 | orchestrator | Saturday 20 September 2025 09:50:19 +0000 (0:00:00.821) 0:00:04.841 **** 2025-09-20 09:51:19.168206 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
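The `FAILED - RETRYING: ... (10 retries left).` line above is Ansible's `until`/`retries`/`delay` loop re-running the "Manage cephclient service" task until the service comes up (it succeeded on the second attempt, accounting for the ~38s task duration). A minimal sketch of that retry pattern, with a hypothetical `wait_until` helper standing in for Ansible's loop machinery:

```python
import time

def wait_until(check, retries=10, delay=5):
    """Re-run `check` until it returns truthy or retries are exhausted,
    mirroring Ansible's `until`/`retries`/`delay` task options
    (attempt counts may differ slightly from Ansible's exact semantics)."""
    for attempt in range(retries):
        if check():
            return True
        # Ansible logs: "FAILED - RETRYING: <task> (N retries left)."
        print(f"FAILED - RETRYING ({retries - attempt - 1} retries left).")
        time.sleep(delay)
    return False
```

A task that fails twice and then succeeds would log two retry lines and finish `ok`, exactly as seen in the console output.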
2025-09-20 09:51:19.168217 | orchestrator | ok: [testbed-manager]
2025-09-20 09:51:19.168228 | orchestrator |
2025-09-20 09:51:19.168239 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-20 09:51:19.168250 | orchestrator | Saturday 20 September 2025 09:50:57 +0000 (0:00:37.899) 0:00:42.740 ****
2025-09-20 09:51:19.168261 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-20 09:51:19.168272 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-20 09:51:19.168283 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-20 09:51:19.168294 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-20 09:51:19.168304 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-20 09:51:19.168315 | orchestrator |
2025-09-20 09:51:19.168326 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-20 09:51:19.168337 | orchestrator | Saturday 20 September 2025 09:51:01 +0000 (0:00:04.088) 0:00:46.829 ****
2025-09-20 09:51:19.168347 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-20 09:51:19.168358 | orchestrator |
2025-09-20 09:51:19.168369 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-20 09:51:19.168379 | orchestrator | Saturday 20 September 2025 09:51:01 +0000 (0:00:00.493) 0:00:47.322 ****
2025-09-20 09:51:19.168390 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:51:19.168401 | orchestrator |
2025-09-20 09:51:19.168412 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-20 09:51:19.168423 | orchestrator | Saturday 20 September 2025 09:51:02 +0000 (0:00:00.306) 0:00:47.783 ****
2025-09-20 09:51:19.168433 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:51:19.168444 | orchestrator |
2025-09-20 09:51:19.168455 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-09-20 09:51:19.168465 | orchestrator | Saturday 20 September 2025 09:51:02 +0000 (0:00:00.306) 0:00:47.783 ****
2025-09-20 09:51:19.168476 | orchestrator | changed: [testbed-manager]
2025-09-20 09:51:19.168487 | orchestrator |
2025-09-20 09:51:19.168500 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-20 09:51:19.168522 | orchestrator | Saturday 20 September 2025 09:51:04 +0000 (0:00:02.027) 0:00:49.810 ****
2025-09-20 09:51:19.168534 | orchestrator | changed: [testbed-manager]
2025-09-20 09:51:19.168547 | orchestrator |
2025-09-20 09:51:19.168560 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-09-20 09:51:19.168572 | orchestrator | Saturday 20 September 2025 09:51:05 +0000 (0:00:00.783) 0:00:50.594 ****
2025-09-20 09:51:19.168584 | orchestrator | changed: [testbed-manager]
2025-09-20 09:51:19.168597 | orchestrator |
2025-09-20 09:51:19.168609 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-09-20 09:51:19.168622 | orchestrator | Saturday 20 September 2025 09:51:05 +0000 (0:00:00.641) 0:00:51.236 ****
2025-09-20 09:51:19.168635 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-09-20 09:51:19.168647 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-09-20 09:51:19.168659 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-09-20 09:51:19.168671 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-09-20 09:51:19.168684 | orchestrator |
2025-09-20 09:51:19.168696 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:51:19.168709 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-20 09:51:19.168722 | orchestrator |
2025-09-20 09:51:19.168733 | orchestrator |
2025-09-20 09:51:19.168780 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:51:19.168793 | orchestrator | Saturday 20 September 2025 09:51:07 +0000 (0:00:01.490) 0:00:52.726 ****
2025-09-20 09:51:19.168804 | orchestrator | ===============================================================================
2025-09-20 09:51:19.168814 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.90s
2025-09-20 09:51:19.168825 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.09s
2025-09-20 09:51:19.168842 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.03s
2025-09-20 09:51:19.168854 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.49s
2025-09-20 09:51:19.168865 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.36s
2025-09-20 09:51:19.168876 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s
2025-09-20 09:51:19.168886 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.01s
2025-09-20 09:51:19.168897 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.82s
2025-09-20 09:51:19.168908 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.78s
2025-09-20 09:51:19.168919 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s
2025-09-20 09:51:19.168929 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.49s
2025-09-20 09:51:19.168940 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s
2025-09-20 09:51:19.168951 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s
2025-09-20 09:51:19.168962 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2025-09-20 09:51:19.168973 | orchestrator |
2025-09-20 09:51:19.168983 | orchestrator |
2025-09-20 09:51:19.168994 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:51:19.169005 | orchestrator |
2025-09-20 09:51:19.169016 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 09:51:19.169027 | orchestrator | Saturday 20 September 2025 09:51:11 +0000 (0:00:00.182) 0:00:00.182 ****
2025-09-20 09:51:19.169038 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:51:19.169048 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:51:19.169059 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:51:19.169070 | orchestrator |
2025-09-20 09:51:19.169081 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 09:51:19.169125 | orchestrator | Saturday 20 September 2025 09:51:12 +0000 (0:00:00.340) 0:00:00.522 ****
2025-09-20 09:51:19.169138 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-20 09:51:19.169149 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-20 09:51:19.169191 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-20 09:51:19.169202 | orchestrator |
2025-09-20 09:51:19.169213 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-20 09:51:19.169223 | orchestrator |
2025-09-20 09:51:19.169234 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-20 09:51:19.169245 | orchestrator | Saturday 20 September 2025 09:51:13 +0000 (0:00:00.892) 0:00:01.414 ****
2025-09-20 09:51:19.169255 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:51:19.169266 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:51:19.169277 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:51:19.169287 | orchestrator |
2025-09-20 09:51:19.169298 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:51:19.169310 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:51:19.169321 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:51:19.169332 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:51:19.169343 | orchestrator |
2025-09-20 09:51:19.169354 | orchestrator |
2025-09-20 09:51:19.169364 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:51:19.169375 | orchestrator | Saturday 20 September 2025 09:51:13 +0000 (0:00:00.758) 0:00:02.172 ****
2025-09-20 09:51:19.169386 | orchestrator | ===============================================================================
2025-09-20 09:51:19.169396 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s
2025-09-20 09:51:19.169407 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.76s
2025-09-20 09:51:19.169417 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2025-09-20 09:51:19.169428 | orchestrator |
2025-09-20 09:51:19.169439 | orchestrator |
2025-09-20 09:51:19.169449 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:51:19.169460 | orchestrator |
2025-09-20 09:51:19.169470 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 09:51:19.169481 | orchestrator | Saturday 20 September 2025 09:48:38 +0000 (0:00:00.290) 0:00:00.290 ****
2025-09-20 09:51:19.169491 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:51:19.169502 |
orchestrator | ok: [testbed-node-1] 2025-09-20 09:51:19.169513 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:51:19.169524 | orchestrator | 2025-09-20 09:51:19.169534 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:51:19.169545 | orchestrator | Saturday 20 September 2025 09:48:38 +0000 (0:00:00.327) 0:00:00.617 **** 2025-09-20 09:51:19.169556 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-20 09:51:19.169566 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-20 09:51:19.169577 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-20 09:51:19.169588 | orchestrator | 2025-09-20 09:51:19.169599 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-20 09:51:19.169610 | orchestrator | 2025-09-20 09:51:19.169649 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 09:51:19.169661 | orchestrator | Saturday 20 September 2025 09:48:39 +0000 (0:00:00.514) 0:00:01.132 **** 2025-09-20 09:51:19.169672 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:51:19.169683 | orchestrator | 2025-09-20 09:51:19.169694 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-20 09:51:19.169718 | orchestrator | Saturday 20 September 2025 09:48:40 +0000 (0:00:00.569) 0:00:01.701 **** 2025-09-20 09:51:19.169751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:51:19.169769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:51:19.169782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:51:19.169795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 09:51:19.169836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-09-20 09:51:19.169857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 09:51:19.169869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 09:51:19.169882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 09:51:19.169893 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 09:51:19.169904 | orchestrator | 2025-09-20 09:51:19.169955 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-20 09:51:19.169977 | orchestrator | Saturday 20 September 2025 09:48:41 +0000 (0:00:01.772) 0:00:03.473 **** 2025-09-20 09:51:19.169996 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-20 09:51:19.170090 | orchestrator | 2025-09-20 09:51:19.170114 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-20 09:51:19.170132 | orchestrator | Saturday 20 September 2025 09:48:42 +0000 (0:00:00.825) 0:00:04.299 **** 2025-09-20 09:51:19.170148 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:51:19.170228 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:51:19.170246 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:51:19.170264 | orchestrator | 2025-09-20 09:51:19.170280 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-20 09:51:19.170296 | orchestrator | Saturday 20 September 2025 09:48:43 +0000 (0:00:00.494) 0:00:04.793 **** 2025-09-20 09:51:19.170324 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 09:51:19.170339 | orchestrator | 2025-09-20 09:51:19.170355 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-09-20 09:51:19.170369 | orchestrator | Saturday 20 September 2025 09:48:43 +0000 (0:00:00.667) 0:00:05.461 **** 2025-09-20 09:51:19.170383 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:51:19.170398 | orchestrator | 2025-09-20 09:51:19.170424 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-20 09:51:19.170441 | orchestrator | Saturday 20 September 2025 09:48:44 +0000 (0:00:00.536) 0:00:05.998 **** 2025-09-20 09:51:19.170467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:51:19.170487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:51:19.170504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:51:19.170523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 09:51:19.170561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 09:51:19.170594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 09:51:19.170611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 09:51:19.170628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 09:51:19.170645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 09:51:19.170662 | orchestrator | 2025-09-20 09:51:19.170679 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-20 09:51:19.170696 | orchestrator | Saturday 20 September 2025 09:48:47 +0000 (0:00:03.115) 0:00:09.114 **** 2025-09-20 09:51:19.170714 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-20 09:51:19.170757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-20 09:51:19.170776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.170792 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:51:19.170809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.170827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.170842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.170869 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:51:19.170897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.170922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.170939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.170956 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:51:19.170972 | orchestrator |
2025-09-20 09:51:19.170988 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-09-20 09:51:19.171004 | orchestrator | Saturday 20 September 2025 09:48:48 +0000 (0:00:00.772) 0:00:09.887 ****
2025-09-20 09:51:19.171022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.171054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.171064 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:51:19.171088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.171109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.171119 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:51:19.171129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.171197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.171209 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:51:19.171219 | orchestrator |
2025-09-20 09:51:19.171233 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-09-20 09:51:19.171243 | orchestrator | Saturday 20 September 2025 09:48:48 +0000 (0:00:00.740) 0:00:10.627 ****
2025-09-20 09:51:19.171253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.171333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.171351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.171368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.171384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.171443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.171457 | orchestrator |
2025-09-20 09:51:19.171467 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-09-20 09:51:19.171477 | orchestrator | Saturday 20 September 2025 09:48:52 +0000 (0:00:03.219) 0:00:13.847 ****
2025-09-20 09:51:19.171496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.171523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.171550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.171582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.171593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.171603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.171622 | orchestrator |
2025-09-20 09:51:19.171633 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-09-20 09:51:19.171642 | orchestrator | Saturday 20 September 2025 09:48:57 +0000 (0:00:05.362) 0:00:19.209 ****
2025-09-20 09:51:19.171652 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:51:19.171661 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:51:19.171671 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:51:19.171680 | orchestrator |
2025-09-20 09:51:19.171690 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-09-20 09:51:19.171699 | orchestrator | Saturday 20 September 2025 09:48:59 +0000 (0:00:01.465) 0:00:20.674 ****
2025-09-20 09:51:19.171709 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:51:19.171718 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:51:19.171728 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:51:19.171737 | orchestrator |
2025-09-20 09:51:19.171746 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-09-20 09:51:19.171756 | orchestrator | Saturday 20 September 2025 09:48:59 +0000 (0:00:00.545) 0:00:21.220 ****
2025-09-20 09:51:19.171765 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:51:19.171775 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:51:19.171784 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:51:19.171799 | orchestrator |
2025-09-20 09:51:19.171816 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-09-20 09:51:19.171833 | orchestrator | Saturday 20 September 2025 09:48:59 +0000 (0:00:00.290) 0:00:21.510 ****
2025-09-20 09:51:19.171849 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:51:19.171865 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:51:19.171881 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:51:19.171899 | orchestrator |
2025-09-20 09:51:19.171915 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-09-20 09:51:19.171931 | orchestrator | Saturday 20 September 2025 09:49:00 +0000 (0:00:00.523) 0:00:22.034 ****
2025-09-20 09:51:19.171949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.171983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.172002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.172030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.172048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-20 09:51:19.172067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-20 09:51:19.172093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.172118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.172147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-20 09:51:19.172189 | orchestrator |
2025-09-20 09:51:19.172206 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-20 09:51:19.172222 | orchestrator | Saturday 20 September 2025 09:49:02 +0000 (0:00:02.430) 0:00:24.465 ****
2025-09-20 09:51:19.172238 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:51:19.172254 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:51:19.172272 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:51:19.172288 | orchestrator |
2025-09-20 09:51:19.172305 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-09-20 09:51:19.172322 | orchestrator | Saturday 20 September 2025 09:49:03 +0000 (0:00:00.312) 0:00:24.777 ****
2025-09-20 09:51:19.172334 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-20 09:51:19.172344 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-20 09:51:19.172354 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-20 09:51:19.172363 | orchestrator |
2025-09-20 09:51:19.172377 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-09-20 09:51:19.172393 | orchestrator | Saturday 20 September 2025 09:49:05 +0000 (0:00:01.917) 0:00:26.694 ****
2025-09-20 09:51:19.172409 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-20 09:51:19.172426 | orchestrator |
2025-09-20 09:51:19.172443 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-09-20 09:51:19.172458 | orchestrator | Saturday 20 September 2025 09:49:06 +0000 (0:00:00.944) 0:00:27.639 ****
2025-09-20 09:51:19.172475 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:51:19.172493 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:51:19.172505 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:51:19.172514 | orchestrator |
2025-09-20 09:51:19.172524 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-09-20 09:51:19.172533 | orchestrator | Saturday 20 September 2025 09:49:06 +0000 (0:00:00.887) 0:00:28.526 ****
2025-09-20 09:51:19.172543 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-20 09:51:19.172552 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-20 09:51:19.172561 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-20 09:51:19.172571 | orchestrator |
2025-09-20 09:51:19.172580 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-09-20 09:51:19.172590 | orchestrator | Saturday 20 September 2025 09:49:08 +0000 (0:00:01.201) 0:00:29.728 ****
2025-09-20 09:51:19.172600 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:51:19.172610 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:51:19.172619 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:51:19.172628 | orchestrator |
2025-09-20 09:51:19.172638 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-20
09:51:19.172647 | orchestrator | Saturday 20 September 2025 09:49:08 +0000 (0:00:00.342) 0:00:30.070 **** 2025-09-20 09:51:19.172657 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-20 09:51:19.172675 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-20 09:51:19.172684 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-20 09:51:19.172694 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-20 09:51:19.172703 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-20 09:51:19.172720 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-20 09:51:19.172730 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-20 09:51:19.172740 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-20 09:51:19.172749 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-20 09:51:19.172764 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-20 09:51:19.172774 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-20 09:51:19.172783 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-20 09:51:19.172793 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-20 09:51:19.172802 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 
'fernet-healthcheck.sh'}) 2025-09-20 09:51:19.172812 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-20 09:51:19.172821 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 09:51:19.172831 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 09:51:19.172840 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 09:51:19.172850 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 09:51:19.172859 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 09:51:19.172869 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 09:51:19.172878 | orchestrator | 2025-09-20 09:51:19.172888 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-20 09:51:19.172897 | orchestrator | Saturday 20 September 2025 09:49:17 +0000 (0:00:09.118) 0:00:39.189 **** 2025-09-20 09:51:19.172907 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 09:51:19.172916 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 09:51:19.172926 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 09:51:19.172935 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 09:51:19.172945 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 09:51:19.172954 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 09:51:19.172964 | orchestrator | 
2025-09-20 09:51:19.172973 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-20 09:51:19.172983 | orchestrator | Saturday 20 September 2025 09:49:20 +0000 (0:00:03.225) 0:00:42.415 **** 2025-09-20 09:51:19.172993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:51:19.173018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:51:19.173035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-20 09:51:19.173046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 09:51:19.173056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 09:51:19.173072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-20 09:51:19.173082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 09:51:19.173099 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 09:51:19.173134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-20 09:51:19.173146 | orchestrator | 2025-09-20 09:51:19.173209 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 09:51:19.173219 | orchestrator | Saturday 20 September 2025 09:49:23 +0000 (0:00:02.341) 0:00:44.757 **** 2025-09-20 09:51:19.173229 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:51:19.173239 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:51:19.173249 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:51:19.173258 | orchestrator | 2025-09-20 09:51:19.173268 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-20 09:51:19.173281 | orchestrator | Saturday 20 September 
2025 09:49:23 +0000 (0:00:00.341) 0:00:45.098 **** 2025-09-20 09:51:19.173298 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:51:19.173314 | orchestrator | 2025-09-20 09:51:19.173331 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-20 09:51:19.173347 | orchestrator | Saturday 20 September 2025 09:49:25 +0000 (0:00:02.195) 0:00:47.294 **** 2025-09-20 09:51:19.173363 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:51:19.173378 | orchestrator | 2025-09-20 09:51:19.173394 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-20 09:51:19.173410 | orchestrator | Saturday 20 September 2025 09:49:27 +0000 (0:00:02.049) 0:00:49.344 **** 2025-09-20 09:51:19.173426 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:51:19.173442 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:51:19.173457 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:51:19.173485 | orchestrator | 2025-09-20 09:51:19.173503 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-20 09:51:19.173520 | orchestrator | Saturday 20 September 2025 09:49:28 +0000 (0:00:00.926) 0:00:50.271 **** 2025-09-20 09:51:19.173536 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:51:19.173549 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:51:19.173559 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:51:19.173568 | orchestrator | 2025-09-20 09:51:19.173578 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-20 09:51:19.173587 | orchestrator | Saturday 20 September 2025 09:49:29 +0000 (0:00:00.563) 0:00:50.834 **** 2025-09-20 09:51:19.173596 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:51:19.173606 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:51:19.173615 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:51:19.173624 | orchestrator | 2025-09-20 
09:51:19.173634 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-20 09:51:19.173643 | orchestrator | Saturday 20 September 2025 09:49:29 +0000 (0:00:00.332) 0:00:51.167 **** 2025-09-20 09:51:19.173653 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:51:19.173662 | orchestrator | 2025-09-20 09:51:19.173670 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-20 09:51:19.173677 | orchestrator | Saturday 20 September 2025 09:49:43 +0000 (0:00:13.945) 0:01:05.112 **** 2025-09-20 09:51:19.173685 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:51:19.173693 | orchestrator | 2025-09-20 09:51:19.173700 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-20 09:51:19.173708 | orchestrator | Saturday 20 September 2025 09:49:53 +0000 (0:00:10.104) 0:01:15.217 **** 2025-09-20 09:51:19.173716 | orchestrator | 2025-09-20 09:51:19.173723 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-20 09:51:19.173731 | orchestrator | Saturday 20 September 2025 09:49:53 +0000 (0:00:00.083) 0:01:15.301 **** 2025-09-20 09:51:19.173738 | orchestrator | 2025-09-20 09:51:19.173746 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-20 09:51:19.173754 | orchestrator | Saturday 20 September 2025 09:49:53 +0000 (0:00:00.067) 0:01:15.369 **** 2025-09-20 09:51:19.173761 | orchestrator | 2025-09-20 09:51:19.173769 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-20 09:51:19.173777 | orchestrator | Saturday 20 September 2025 09:49:53 +0000 (0:00:00.084) 0:01:15.453 **** 2025-09-20 09:51:19.173784 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:51:19.173792 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:51:19.173799 | orchestrator | changed: 
[testbed-node-1] 2025-09-20 09:51:19.173807 | orchestrator | 2025-09-20 09:51:19.173815 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-20 09:51:19.173822 | orchestrator | Saturday 20 September 2025 09:50:13 +0000 (0:00:19.461) 0:01:34.914 **** 2025-09-20 09:51:19.173830 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:51:19.173838 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:51:19.173845 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:51:19.173853 | orchestrator | 2025-09-20 09:51:19.173860 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-20 09:51:19.173868 | orchestrator | Saturday 20 September 2025 09:50:18 +0000 (0:00:04.811) 0:01:39.725 **** 2025-09-20 09:51:19.173876 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:51:19.173883 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:51:19.173898 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:51:19.173906 | orchestrator | 2025-09-20 09:51:19.173914 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 09:51:19.173922 | orchestrator | Saturday 20 September 2025 09:50:29 +0000 (0:00:11.772) 0:01:51.497 **** 2025-09-20 09:51:19.173929 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:51:19.173937 | orchestrator | 2025-09-20 09:51:19.173950 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-20 09:51:19.173963 | orchestrator | Saturday 20 September 2025 09:50:30 +0000 (0:00:00.736) 0:01:52.234 **** 2025-09-20 09:51:19.173971 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:51:19.173979 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:51:19.173986 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:51:19.173994 | orchestrator | 2025-09-20 
09:51:19.174002 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-20 09:51:19.174009 | orchestrator | Saturday 20 September 2025 09:50:31 +0000 (0:00:00.794) 0:01:53.028 **** 2025-09-20 09:51:19.174048 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:51:19.174057 | orchestrator | 2025-09-20 09:51:19.174064 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-20 09:51:19.174072 | orchestrator | Saturday 20 September 2025 09:50:33 +0000 (0:00:01.791) 0:01:54.819 **** 2025-09-20 09:51:19.174080 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-20 09:51:19.174088 | orchestrator | 2025-09-20 09:51:19.174096 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-20 09:51:19.174104 | orchestrator | Saturday 20 September 2025 09:50:43 +0000 (0:00:10.612) 0:02:05.432 **** 2025-09-20 09:51:19.174111 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-20 09:51:19.174119 | orchestrator | 2025-09-20 09:51:19.174127 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-20 09:51:19.174135 | orchestrator | Saturday 20 September 2025 09:51:05 +0000 (0:00:21.770) 0:02:27.203 **** 2025-09-20 09:51:19.174142 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-20 09:51:19.174169 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-20 09:51:19.174178 | orchestrator | 2025-09-20 09:51:19.174186 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-20 09:51:19.174194 | orchestrator | Saturday 20 September 2025 09:51:12 +0000 (0:00:06.707) 0:02:33.910 **** 2025-09-20 09:51:19.174202 | orchestrator | skipping: [testbed-node-0] 2025-09-20 
09:51:19.174209 | orchestrator | 2025-09-20 09:51:19.174217 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-20 09:51:19.174225 | orchestrator | Saturday 20 September 2025 09:51:12 +0000 (0:00:00.139) 0:02:34.050 **** 2025-09-20 09:51:19.174233 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:51:19.174240 | orchestrator | 2025-09-20 09:51:19.174248 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-20 09:51:19.174256 | orchestrator | Saturday 20 September 2025 09:51:12 +0000 (0:00:00.243) 0:02:34.293 **** 2025-09-20 09:51:19.174264 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:51:19.174271 | orchestrator | 2025-09-20 09:51:19.174279 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-20 09:51:19.174287 | orchestrator | Saturday 20 September 2025 09:51:12 +0000 (0:00:00.124) 0:02:34.418 **** 2025-09-20 09:51:19.174294 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:51:19.174302 | orchestrator | 2025-09-20 09:51:19.174310 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-20 09:51:19.174318 | orchestrator | Saturday 20 September 2025 09:51:13 +0000 (0:00:00.653) 0:02:35.071 **** 2025-09-20 09:51:19.174325 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:51:19.174333 | orchestrator | 2025-09-20 09:51:19.174341 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-20 09:51:19.174349 | orchestrator | Saturday 20 September 2025 09:51:16 +0000 (0:00:03.292) 0:02:38.364 **** 2025-09-20 09:51:19.174356 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:51:19.174364 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:51:19.174372 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:51:19.174380 | orchestrator | 2025-09-20 09:51:19.174388 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-09-20 09:51:19.174396 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-20 09:51:19.174411 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-20 09:51:19.174419 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-20 09:51:19.174427 | orchestrator | 2025-09-20 09:51:19.174434 | orchestrator | 2025-09-20 09:51:19.174442 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:51:19.174450 | orchestrator | Saturday 20 September 2025 09:51:18 +0000 (0:00:01.368) 0:02:39.732 **** 2025-09-20 09:51:19.174458 | orchestrator | =============================================================================== 2025-09-20 09:51:19.174465 | orchestrator | service-ks-register : keystone | Creating services --------------------- 21.77s 2025-09-20 09:51:19.174473 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.46s 2025-09-20 09:51:19.174481 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.95s 2025-09-20 09:51:19.174488 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.77s 2025-09-20 09:51:19.174496 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.61s 2025-09-20 09:51:19.174509 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.10s 2025-09-20 09:51:19.174517 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.12s 2025-09-20 09:51:19.174525 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.71s 2025-09-20 09:51:19.174533 | orchestrator | keystone : Copying over 
keystone.conf ----------------------------------- 5.36s 2025-09-20 09:51:19.174541 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.81s 2025-09-20 09:51:19.174556 | orchestrator | keystone : Creating default user role ----------------------------------- 3.29s 2025-09-20 09:51:19.174563 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.23s 2025-09-20 09:51:19.174571 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.22s 2025-09-20 09:51:19.174579 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.12s 2025-09-20 09:51:19.174587 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.43s 2025-09-20 09:51:19.174595 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.34s 2025-09-20 09:51:19.174602 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.20s 2025-09-20 09:51:19.174610 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.05s 2025-09-20 09:51:19.174618 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.92s 2025-09-20 09:51:19.174625 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.79s 2025-09-20 09:51:19.176206 | orchestrator | 2025-09-20 09:51:19 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED 2025-09-20 09:51:19.177831 | orchestrator | 2025-09-20 09:51:19 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:51:19.177856 | orchestrator | 2025-09-20 09:51:19 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:22.200929 | orchestrator | 2025-09-20 09:51:22 | INFO  | Task ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED 2025-09-20 09:51:22.201024 | orchestrator | 2025-09-20 
09:51:22 | INFO  | Task af594f58-e1a9-42c9-bd7f-2bc7fb3f9693 is in state STARTED 2025-09-20 09:51:22.201678 | orchestrator | 2025-09-20 09:51:22 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED 2025-09-20 09:51:22.201702 | orchestrator | 2025-09-20 09:51:22 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED 2025-09-20 09:51:22.202103 | orchestrator | 2025-09-20 09:51:22 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:51:22.202123 | orchestrator | 2025-09-20 09:51:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:25.225378 | orchestrator | 2025-09-20 09:51:25 | INFO  | Task ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED 2025-09-20 09:51:25.227033 | orchestrator | 2025-09-20 09:51:25 | INFO  | Task af594f58-e1a9-42c9-bd7f-2bc7fb3f9693 is in state STARTED 2025-09-20 09:51:25.227067 | orchestrator | 2025-09-20 09:51:25 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED 2025-09-20 09:51:25.228531 | orchestrator | 2025-09-20 09:51:25 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED 2025-09-20 09:51:25.231432 | orchestrator | 2025-09-20 09:51:25 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:51:25.232091 | orchestrator | 2025-09-20 09:51:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:51:28.268522 | orchestrator | 2025-09-20 09:51:28 | INFO  | Task ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED 2025-09-20 09:51:28.269760 | orchestrator | 2025-09-20 09:51:28 | INFO  | Task af594f58-e1a9-42c9-bd7f-2bc7fb3f9693 is in state STARTED 2025-09-20 09:51:28.270587 | orchestrator | 2025-09-20 09:51:28 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED 2025-09-20 09:51:28.271590 | orchestrator | 2025-09-20 09:51:28 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED 2025-09-20 09:51:28.272575 | orchestrator | 2025-09-20 
09:51:28 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED
2025-09-20 09:51:28.272598 | orchestrator | 2025-09-20 09:51:28 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:51:31.320113 | orchestrator | 2025-09-20 09:51:31 | INFO  | Task ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED
2025-09-20 09:51:31.321226 | orchestrator | 2025-09-20 09:51:31 | INFO  | Task af594f58-e1a9-42c9-bd7f-2bc7fb3f9693 is in state STARTED
2025-09-20 09:51:31.322978 | orchestrator | 2025-09-20 09:51:31 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED
2025-09-20 09:51:31.324085 | orchestrator | 2025-09-20 09:51:31 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED
2025-09-20 09:51:31.325997 | orchestrator | 2025-09-20 09:51:31 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED
2025-09-20 09:51:31.326066 | orchestrator | 2025-09-20 09:51:31 | INFO  | Wait 1 second(s) until the next check
[identical poll cycles at 09:51:34, 09:51:37, 09:51:40, 09:51:43, 09:51:46, 09:51:49 and 09:51:52 elided; all five tasks remain STARTED]
2025-09-20 09:51:55.631788 | orchestrator | 2025-09-20 09:51:55 | INFO  | Task ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED
2025-09-20 09:51:55.631881 | orchestrator | 2025-09-20 09:51:55 | INFO  | Task af594f58-e1a9-42c9-bd7f-2bc7fb3f9693 is in state SUCCESS
2025-09-20 09:51:55.632251 | orchestrator | 2025-09-20 09:51:55 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:51:55.632662 | orchestrator | 2025-09-20 09:51:55 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED
2025-09-20 09:51:55.633186 | orchestrator | 2025-09-20 09:51:55 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED
2025-09-20 09:51:55.633575 | orchestrator | 2025-09-20 09:51:55 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED
2025-09-20 09:51:55.633596 | orchestrator | 2025-09-20 09:51:55 | INFO  | Wait 1 second(s) until the next check
[identical poll cycles at 09:51:58 through 09:52:35 elided; tasks ba6cf55a, 7f78cd06, 44ecfc8b, 1c4ee036 and 1aa2ac7f remain STARTED]
2025-09-20 09:52:38.098111 | orchestrator | 2025-09-20 09:52:38 | INFO  | Task
ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED
2025-09-20 09:52:38.098265 | orchestrator | 2025-09-20 09:52:38 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:52:38.098598 | orchestrator | 2025-09-20 09:52:38 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state STARTED
2025-09-20 09:52:38.099071 | orchestrator | 2025-09-20 09:52:38 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED
2025-09-20 09:52:38.099765 | orchestrator | 2025-09-20 09:52:38 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED
2025-09-20 09:52:38.099786 | orchestrator | 2025-09-20 09:52:38 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:52:41.173101 | orchestrator | 2025-09-20 09:52:41 | INFO  | Task ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED
2025-09-20 09:52:41.173295 | orchestrator | 2025-09-20 09:52:41 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:52:41.173722 | orchestrator | 2025-09-20 09:52:41 | INFO  | Task 44ecfc8b-9b5f-4923-8249-b0b0547138dd is in state SUCCESS
2025-09-20 09:52:41.174212 | orchestrator |
2025-09-20 09:52:41.174241 | orchestrator |
2025-09-20 09:52:41.174253 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:52:41.174266 | orchestrator |
2025-09-20 09:52:41.174277 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 09:52:41.174288 | orchestrator | Saturday 20 September 2025 09:51:19 +0000 (0:00:00.247) 0:00:00.247 ****
2025-09-20 09:52:41.174299 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:52:41.174315 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:52:41.174333 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:52:41.174352 | orchestrator | ok: [testbed-manager]
2025-09-20 09:52:41.174369 | orchestrator | ok: [testbed-node-3]
2025-09-20 09:52:41.174504 | orchestrator | ok: [testbed-node-4]
2025-09-20 09:52:41.174518 | orchestrator | ok: [testbed-node-5]
2025-09-20 09:52:41.174529 | orchestrator |
2025-09-20 09:52:41.174540 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 09:52:41.174551 | orchestrator | Saturday 20 September 2025 09:51:21 +0000 (0:00:01.421) 0:00:01.669 ****
2025-09-20 09:52:41.174562 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-09-20 09:52:41.174573 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-09-20 09:52:41.174602 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-09-20 09:52:41.174614 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-09-20 09:52:41.174624 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-09-20 09:52:41.174635 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-09-20 09:52:41.174645 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-09-20 09:52:41.174656 | orchestrator |
2025-09-20 09:52:41.174667 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-20 09:52:41.174678 | orchestrator |
2025-09-20 09:52:41.174689 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-09-20 09:52:41.174724 | orchestrator | Saturday 20 September 2025 09:51:22 +0000 (0:00:01.079) 0:00:02.749 ****
2025-09-20 09:52:41.174736 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 09:52:41.174749 | orchestrator |
2025-09-20 09:52:41.174759 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-09-20 09:52:41.174772 | orchestrator | Saturday 20 September 2025 09:51:24 +0000 (0:00:02.073) 0:00:04.822 ****
2025-09-20 09:52:41.174791 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-09-20 09:52:41.174821 | orchestrator |
2025-09-20 09:52:41.174839 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-09-20 09:52:41.174857 | orchestrator | Saturday 20 September 2025 09:51:27 +0000 (0:00:03.714) 0:00:08.537 ****
2025-09-20 09:52:41.174876 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-09-20 09:52:41.174896 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-09-20 09:52:41.174912 | orchestrator |
2025-09-20 09:52:41.174927 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-09-20 09:52:41.174942 | orchestrator | Saturday 20 September 2025 09:51:34 +0000 (0:00:06.592) 0:00:15.129 ****
2025-09-20 09:52:41.174959 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-20 09:52:41.174976 | orchestrator |
2025-09-20 09:52:41.174993 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-09-20 09:52:41.175012 | orchestrator | Saturday 20 September 2025 09:51:38 +0000 (0:00:03.483) 0:00:18.613 ****
2025-09-20 09:52:41.175029 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-20 09:52:41.175049 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-09-20 09:52:41.175067 | orchestrator |
2025-09-20 09:52:41.175086 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-09-20 09:52:41.175100 | orchestrator | Saturday 20 September 2025 09:51:42 +0000 (0:00:04.153) 0:00:22.766 ****
2025-09-20 09:52:41.175110 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-20 09:52:41.175337 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-09-20 09:52:41.175360 | orchestrator |
2025-09-20 09:52:41.175375 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-09-20 09:52:41.175388 | orchestrator | Saturday 20 September 2025 09:51:48 +0000 (0:00:05.812) 0:00:28.578 ****
2025-09-20 09:52:41.175400 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-09-20 09:52:41.175413 | orchestrator |
2025-09-20 09:52:41.175425 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:52:41.175438 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.175452 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.175470 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.175494 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.175520 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.175556 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.175574 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.175611 | orchestrator |
2025-09-20 09:52:41.175631 | orchestrator |
2025-09-20 09:52:41.175650 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:52:41.175668 | orchestrator | Saturday 20 September 2025 09:51:53 +0000 (0:00:04.997) 0:00:33.575 ****
2025-09-20 09:52:41.175683 | orchestrator | ===============================================================================
2025-09-20 09:52:41.175694 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.59s
2025-09-20 09:52:41.175705 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.81s
2025-09-20 09:52:41.175715 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.00s
2025-09-20 09:52:41.175726 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.15s
2025-09-20 09:52:41.175745 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.71s
2025-09-20 09:52:41.175756 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.48s
2025-09-20 09:52:41.175767 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.07s
2025-09-20 09:52:41.175777 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.42s
2025-09-20 09:52:41.175788 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.08s
2025-09-20 09:52:41.175799 | orchestrator |
2025-09-20 09:52:41.175809 | orchestrator |
2025-09-20 09:52:41.175820 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-09-20 09:52:41.175830 | orchestrator |
2025-09-20 09:52:41.175841 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-09-20 09:52:41.175851 | orchestrator | Saturday 20 September 2025 09:51:11 +0000 (0:00:00.269) 0:00:00.269 ****
2025-09-20 09:52:41.175862 | orchestrator | changed: [testbed-manager]
2025-09-20 09:52:41.175873 | orchestrator |
2025-09-20 09:52:41.175883 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-09-20 09:52:41.175894 | orchestrator | Saturday 20 September 2025 09:51:14 +0000 (0:00:02.227) 0:00:02.496 ****
2025-09-20 09:52:41.175904 | orchestrator | changed: [testbed-manager]
2025-09-20 09:52:41.175915 | orchestrator |
2025-09-20 09:52:41.175926 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-09-20 09:52:41.175936 | orchestrator | Saturday 20 September 2025 09:51:15 +0000 (0:00:01.023) 0:00:03.519 ****
2025-09-20 09:52:41.175947 | orchestrator | changed: [testbed-manager]
2025-09-20 09:52:41.175957 | orchestrator |
2025-09-20 09:52:41.175968 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-09-20 09:52:41.175978 | orchestrator | Saturday 20 September 2025 09:51:16 +0000 (0:00:01.438) 0:00:04.957 ****
2025-09-20 09:52:41.175989 | orchestrator | changed: [testbed-manager]
2025-09-20 09:52:41.176000 | orchestrator |
2025-09-20 09:52:41.176010 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-09-20 09:52:41.176021 | orchestrator | Saturday 20 September 2025 09:51:18 +0000 (0:00:02.190) 0:00:07.148 ****
2025-09-20 09:52:41.176031 | orchestrator | changed: [testbed-manager]
2025-09-20 09:52:41.176042 | orchestrator |
2025-09-20 09:52:41.176052 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-09-20 09:52:41.176063 | orchestrator | Saturday 20 September 2025 09:51:19 +0000 (0:00:00.919) 0:00:08.067 ****
2025-09-20 09:52:41.176074 | orchestrator | changed: [testbed-manager]
2025-09-20 09:52:41.176084 | orchestrator |
2025-09-20 09:52:41.176095 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-09-20 09:52:41.176106 | orchestrator | Saturday 20 September 2025 09:51:20 +0000 (0:00:00.876) 0:00:08.944 ****
2025-09-20 09:52:41.176116 | orchestrator | changed: [testbed-manager]
2025-09-20 09:52:41.176127 | orchestrator |
2025-09-20 09:52:41.176138 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-09-20 09:52:41.176148 | orchestrator | Saturday 20 September 2025 09:51:21 +0000 (0:00:01.223) 0:00:10.167 ****
2025-09-20 09:52:41.176167 | orchestrator | changed: [testbed-manager]
2025-09-20 09:52:41.176205 | orchestrator |
2025-09-20 09:52:41.176218 | orchestrator | TASK [Create admin user] *******************************************************
2025-09-20 09:52:41.176228 | orchestrator | Saturday 20 September 2025 09:51:22 +0000 (0:00:00.851) 0:00:11.018 ****
2025-09-20 09:52:41.176239 | orchestrator | changed: [testbed-manager]
2025-09-20 09:52:41.176249 | orchestrator |
2025-09-20 09:52:41.176260 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-09-20 09:52:41.176271 | orchestrator | Saturday 20 September 2025 09:52:14 +0000 (0:00:52.163) 0:01:03.182 ****
2025-09-20 09:52:41.176281 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:52:41.176292 | orchestrator |
2025-09-20 09:52:41.176302 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-20 09:52:41.176313 | orchestrator |
2025-09-20 09:52:41.176324 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-20 09:52:41.176334 | orchestrator | Saturday 20 September 2025 09:52:14 +0000 (0:00:00.112) 0:01:03.295 ****
2025-09-20 09:52:41.176345 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:52:41.176356 | orchestrator |
2025-09-20 09:52:41.176366 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-20 09:52:41.176377 | orchestrator |
2025-09-20 09:52:41.176387 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-20 09:52:41.176398 | orchestrator | Saturday 20 September 2025 09:52:26 +0000 (0:00:11.812) 0:01:15.107 ****
2025-09-20 09:52:41.176409 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:52:41.176419 | orchestrator |
2025-09-20 09:52:41.176430 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-20 09:52:41.176440 | orchestrator |
2025-09-20 09:52:41.176451 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-20 09:52:41.176462 | orchestrator | Saturday 20 September 2025 09:52:28 +0000 (0:00:01.294) 0:01:16.402 ****
2025-09-20 09:52:41.176472 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:52:41.176483 | orchestrator |
2025-09-20 09:52:41.176503 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:52:41.176514 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-20 09:52:41.176525 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.176536 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.176547 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 09:52:41.176558 | orchestrator |
2025-09-20 09:52:41.176569 | orchestrator |
2025-09-20 09:52:41.176579 | orchestrator |
2025-09-20 09:52:41.176590 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:52:41.176615 | orchestrator | Saturday 20 September 2025 09:52:39 +0000 (0:00:11.313) 0:01:27.715 ****
2025-09-20 09:52:41.176626 | orchestrator | ===============================================================================
2025-09-20 09:52:41.176643 | orchestrator | Create admin user ------------------------------------------------------ 52.16s
2025-09-20 09:52:41.176661 | orchestrator | Restart ceph manager service ------------------------------------------- 24.42s
2025-09-20 09:52:41.176679 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.23s
2025-09-20 09:52:41.176697 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 2.19s
2025-09-20 09:52:41.176716 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.44s
2025-09-20 09:52:41.176734 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.22s
2025-09-20 09:52:41.176751 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.02s
2025-09-20 09:52:41.176783 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.92s
2025-09-20 09:52:41.176802 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.88s
2025-09-20 09:52:41.176816 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.85s
2025-09-20 09:52:41.176827 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.11s
2025-09-20 09:52:41.176838 | orchestrator | 2025-09-20 09:52:41 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED
2025-09-20 09:52:41.176849 | orchestrator | 2025-09-20 09:52:41 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED
2025-09-20 09:52:41.176859 | orchestrator | 2025-09-20 09:52:41 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:52:44.195308 | orchestrator | 2025-09-20 09:52:44 | INFO  | Task ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state STARTED
2025-09-20 09:52:44.195615 | orchestrator | 2025-09-20 09:52:44 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:52:44.196263 | orchestrator | 2025-09-20 09:52:44 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED
2025-09-20 09:52:44.197213 | orchestrator | 2025-09-20 09:52:44 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED
2025-09-20 09:52:44.197322
| orchestrator | 2025-09-20 09:52:44 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:54:18.683555 | orchestrator | 2025-09-20 09:54:18 | INFO  | Task 
ba6cf55a-1c56-4064-a1ed-00d388c96087 is in state SUCCESS
2025-09-20 09:54:18.683637 | orchestrator | 2025-09-20 09:54:18 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:54:18.683646 | orchestrator | 2025-09-20 09:54:18 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED
2025-09-20 09:54:18.683653 | orchestrator | 2025-09-20 09:54:18 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED
2025-09-20 09:54:18.683660 | orchestrator | 2025-09-20 09:54:18 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:54:18.684653 | orchestrator |
2025-09-20 09:54:18.684683 | orchestrator |
2025-09-20 09:54:18.684695 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:54:18.684707 | orchestrator |
2025-09-20 09:54:18.684714 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 09:54:18.684721 | orchestrator | Saturday 20 September 2025 09:51:20 +0000 (0:00:00.300) 0:00:00.300 ****
2025-09-20 09:54:18.684728 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:54:18.684735 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:54:18.684741 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:54:18.684767 | orchestrator |
2025-09-20 09:54:18.684774 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 09:54:18.684780 | orchestrator | Saturday 20 September 2025 09:51:20 +0000 (0:00:00.251) 0:00:00.552 ****
2025-09-20 09:54:18.684786 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-20 09:54:18.684792 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-20 09:54:18.684799 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-20 09:54:18.684805 | orchestrator |
2025-09-20 09:54:18.684822 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-20 09:54:18.684829 | orchestrator |
2025-09-20 09:54:18.684835 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-20 09:54:18.684841 | orchestrator | Saturday 20 September 2025 09:51:20 +0000 (0:00:00.484) 0:00:01.037 ****
2025-09-20 09:54:18.684847 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:54:18.684854 | orchestrator |
2025-09-20 09:54:18.684860 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-20 09:54:18.684866 | orchestrator | Saturday 20 September 2025 09:51:21 +0000 (0:00:00.749) 0:00:01.786 ****
2025-09-20 09:54:18.684902 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-09-20 09:54:18.684909 | orchestrator |
2025-09-20 09:54:18.684915 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-20 09:54:18.684923 | orchestrator | Saturday 20 September 2025 09:51:25 +0000 (0:00:03.862) 0:00:05.649 ****
2025-09-20 09:54:18.684935 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-20 09:54:18.684945 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-20 09:54:18.684957 | orchestrator |
2025-09-20 09:54:18.684967 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-20 09:54:18.684978 | orchestrator | Saturday 20 September 2025 09:51:32 +0000 (0:00:06.920) 0:00:12.569 ****
2025-09-20 09:54:18.684985 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-09-20 09:54:18.684991 | orchestrator |
2025-09-20 09:54:18.684997 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-20 09:54:18.685003 | orchestrator | Saturday 20 September 2025 09:51:36 +0000 (0:00:03.525) 0:00:16.095 ****
2025-09-20 09:54:18.685010 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-20 09:54:18.685016 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-09-20 09:54:18.685022 | orchestrator |
2025-09-20 09:54:18.685028 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-09-20 09:54:18.685034 | orchestrator | Saturday 20 September 2025 09:51:40 +0000 (0:00:04.216) 0:00:20.311 ****
2025-09-20 09:54:18.685040 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-20 09:54:18.685047 | orchestrator |
2025-09-20 09:54:18.685053 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-09-20 09:54:18.685059 | orchestrator | Saturday 20 September 2025 09:51:43 +0000 (0:00:03.533) 0:00:23.845 ****
2025-09-20 09:54:18.685065 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-09-20 09:54:18.685071 | orchestrator |
2025-09-20 09:54:18.685077 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-09-20 09:54:18.685083 | orchestrator | Saturday 20 September 2025 09:51:48 +0000 (0:00:04.493) 0:00:28.338 ****
2025-09-20 09:54:18.685104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.685127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.685135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-20 09:54:18.685147 | orchestrator |
2025-09-20 09:54:18.685153 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-20 09:54:18.685159 | orchestrator | Saturday 20 September 2025 09:51:52 +0000 (0:00:04.457) 0:00:32.796 ****
2025-09-20 09:54:18.685166 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:54:18.685172 | orchestrator |
2025-09-20 09:54:18.685182 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-09-20 09:54:18.685188 | orchestrator | Saturday 20 September 2025 09:51:53 +0000 (0:00:00.611) 0:00:33.407 ****
2025-09-20 09:54:18.685195 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:54:18.685201 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:54:18.685207 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:54:18.685213 | orchestrator |
2025-09-20 09:54:18.685234 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-09-20 09:54:18.685241 | orchestrator | Saturday 20 September 2025 09:51:57 +0000 (0:00:04.223) 0:00:37.630 ****
2025-09-20 09:54:18.685247 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-20 09:54:18.685253 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-20 09:54:18.685263 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-20 09:54:18.685270 | orchestrator |
2025-09-20 09:54:18.685276 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-09-20 09:54:18.685282 | orchestrator | Saturday 20 September 2025 09:51:59 +0000 (0:00:01.420) 0:00:39.051 ****
2025-09-20 09:54:18.685288 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-20 09:54:18.685294 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-20 09:54:18.685300 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-20 09:54:18.685306 | orchestrator |
2025-09-20 09:54:18.685313 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-09-20 09:54:18.685319 | orchestrator | Saturday 20 September 2025 09:52:00 +0000 (0:00:01.213) 0:00:40.264 ****
2025-09-20 09:54:18.685325 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:54:18.685331 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:54:18.685337 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:54:18.685343 | orchestrator |
2025-09-20 09:54:18.685349 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-20 09:54:18.685355 | orchestrator | Saturday 20 September 2025 09:52:00 +0000 (0:00:00.782) 0:00:41.047 ****
2025-09-20 09:54:18.685362 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:18.685368 | orchestrator |
2025-09-20 09:54:18.685374 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-20 09:54:18.685380 | orchestrator | Saturday 20 September 2025 09:52:01 +0000 (0:00:00.455) 0:00:41.502 ****
2025-09-20 09:54:18.685386 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:18.685392 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:18.685398 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:54:18.685405 | orchestrator |
2025-09-20 09:54:18.685411 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-20 09:54:18.685422 | orchestrator | Saturday 20 September 2025 09:52:01 +0000 (0:00:00.300) 0:00:41.804 ****
2025-09-20 09:54:18.685428 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:54:18.685434 | orchestrator |
2025-09-20 09:54:18.685441 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-09-20 09:54:18.685447 | orchestrator | Saturday 20 September 2025 09:52:02 +0000 (0:00:00.640) 0:00:42.444 ****
2025-09-20 09:54:18.685457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'],
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.685468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.685475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.685488 | orchestrator | 2025-09-20 09:54:18.685494 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-20 09:54:18.685500 | orchestrator | Saturday 20 September 2025 09:52:07 +0000 (0:00:04.829) 0:00:47.274 **** 2025-09-20 09:54:18.685515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 09:54:18.685523 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:18.685529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 09:54:18.685542 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:18.685554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 09:54:18.685561 | orchestrator | skipping: 
[testbed-node-1] 2025-09-20 09:54:18.685567 | orchestrator | 2025-09-20 09:54:18.685577 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-20 09:54:18.685583 | orchestrator | Saturday 20 September 2025 09:52:12 +0000 (0:00:05.091) 0:00:52.366 **** 2025-09-20 09:54:18.685590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 09:54:18.685601 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:18.685612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 09:54:18.685619 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:18.685629 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-20 09:54:18.685643 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:18.685650 | orchestrator | 2025-09-20 09:54:18.685656 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-20 09:54:18.685662 | orchestrator | Saturday 20 September 2025 
09:52:16 +0000 (0:00:03.899) 0:00:56.265 **** 2025-09-20 09:54:18.685668 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:18.685674 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:18.685680 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:18.685686 | orchestrator | 2025-09-20 09:54:18.685693 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-20 09:54:18.685699 | orchestrator | Saturday 20 September 2025 09:52:19 +0000 (0:00:03.444) 0:00:59.710 **** 2025-09-20 09:54:18.685709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.685719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.685730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.685737 | orchestrator | 2025-09-20 09:54:18.685743 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-20 09:54:18.685750 | orchestrator | 
Saturday 20 September 2025 09:52:24 +0000 (0:00:04.876) 0:01:04.586 ****
2025-09-20 09:54:18.685756 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:54:18.685762 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:54:18.685768 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:54:18.685774 | orchestrator |
2025-09-20 09:54:18.685780 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-09-20 09:54:18.685786 | orchestrator | Saturday 20 September 2025 09:52:33 +0000 (0:00:09.321) 0:01:13.908 ****
2025-09-20 09:54:18.685792 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:18.685798 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:18.685804 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:54:18.685810 | orchestrator |
2025-09-20 09:54:18.685816 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-09-20 09:54:18.685901 | orchestrator | Saturday 20 September 2025 09:52:39 +0000 (0:00:05.925) 0:01:19.834 ****
2025-09-20 09:54:18.685909 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:18.685915 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:54:18.685921 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:18.685927 | orchestrator |
2025-09-20 09:54:18.685933 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-09-20 09:54:18.685940 | orchestrator | Saturday 20 September 2025 09:52:44 +0000 (0:00:05.033) 0:01:24.867 ****
2025-09-20 09:54:18.685950 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:18.685957 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:18.685963 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:54:18.685969 | orchestrator |
2025-09-20 09:54:18.685975 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-09-20 09:54:18.685981 | orchestrator | Saturday 20 September 2025 09:52:48 +0000 (0:00:03.821) 0:01:28.689 ****
2025-09-20 09:54:18.685987 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:18.685997 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:54:18.686003 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:18.686009 | orchestrator |
2025-09-20 09:54:18.686054 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-09-20 09:54:18.686066 | orchestrator | Saturday 20 September 2025 09:52:52 +0000 (0:00:03.735) 0:01:32.425 ****
2025-09-20 09:54:18.686078 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:18.686088 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:18.686099 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:54:18.686107 | orchestrator |
2025-09-20 09:54:18.686113 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-09-20 09:54:18.686120 | orchestrator | Saturday 20 September 2025 09:52:52 +0000 (0:00:00.295) 0:01:32.720 ****
2025-09-20 09:54:18.686126 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-20 09:54:18.686132 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:18.686138 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-20 09:54:18.686144 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:18.686150 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-20 09:54:18.686156 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:54:18.686162 | orchestrator |
2025-09-20 09:54:18.686169 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-09-20 09:54:18.686175 | orchestrator | Saturday 20 September 2025 09:52:55 +0000 (0:00:02.737) 0:01:35.458 **** 2025-09-20
09:54:18.686182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.686199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.686212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-20 09:54:18.686240 | orchestrator | 2025-09-20 09:54:18.686246 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-20 09:54:18.686252 | orchestrator | Saturday 20 September 2025 09:52:59 +0000 (0:00:03.732) 0:01:39.190 **** 2025-09-20 09:54:18.686258 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:18.686264 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:18.686271 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:18.686277 | orchestrator | 2025-09-20 09:54:18.686283 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-20 09:54:18.686289 | orchestrator | Saturday 20 September 2025 09:52:59 +0000 
(0:00:00.318) 0:01:39.509 ****
2025-09-20 09:54:18.686295 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:54:18.686306 | orchestrator |
2025-09-20 09:54:18.686312 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-09-20 09:54:18.686318 | orchestrator | Saturday 20 September 2025 09:53:01 +0000 (0:00:02.059) 0:01:41.568 ****
2025-09-20 09:54:18.686324 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:54:18.686331 | orchestrator |
2025-09-20 09:54:18.686337 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-09-20 09:54:18.686343 | orchestrator | Saturday 20 September 2025 09:53:03 +0000 (0:00:02.053) 0:01:43.621 ****
2025-09-20 09:54:18.686349 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:54:18.686355 | orchestrator |
2025-09-20 09:54:18.686361 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-09-20 09:54:18.686367 | orchestrator | Saturday 20 September 2025 09:53:05 +0000 (0:00:01.953) 0:01:45.574 ****
2025-09-20 09:54:18.686374 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:54:18.686380 | orchestrator |
2025-09-20 09:54:18.686386 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-09-20 09:54:18.686392 | orchestrator | Saturday 20 September 2025 09:53:33 +0000 (0:00:27.976) 0:02:13.551 ****
2025-09-20 09:54:18.686398 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:54:18.686405 | orchestrator |
2025-09-20 09:54:18.686414 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-20 09:54:18.686421 | orchestrator | Saturday 20 September 2025 09:53:35 +0000 (0:00:02.096) 0:02:15.647 ****
2025-09-20 09:54:18.686427 | orchestrator |
2025-09-20 09:54:18.686433 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-20 09:54:18.686439 | orchestrator | Saturday 20 September 2025 09:53:35 +0000 (0:00:00.073) 0:02:15.720 ****
2025-09-20 09:54:18.686445 | orchestrator |
2025-09-20 09:54:18.686452 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-20 09:54:18.686458 | orchestrator | Saturday 20 September 2025 09:53:35 +0000 (0:00:00.072) 0:02:15.792 ****
2025-09-20 09:54:18.686464 | orchestrator |
2025-09-20 09:54:18.686470 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-20 09:54:18.686476 | orchestrator | Saturday 20 September 2025 09:53:35 +0000 (0:00:00.071) 0:02:15.864 ****
2025-09-20 09:54:18.686482 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:54:18.686492 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:54:18.686498 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:54:18.686504 | orchestrator |
2025-09-20 09:54:18.686510 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:54:18.686517 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-20 09:54:18.686525 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-20 09:54:18.686531 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-20 09:54:18.686538 | orchestrator |
2025-09-20 09:54:18.686545 | orchestrator |
2025-09-20 09:54:18.686552 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:54:18.686559 | orchestrator | Saturday 20 September 2025 09:54:18 +0000 (0:00:42.293) 0:02:58.157 ****
2025-09-20 09:54:18.686566 | orchestrator | ===============================================================================
2025-09-20 09:54:18.686572 | orchestrator | glance : Restart glance-api container ---------------------------------- 42.29s
2025-09-20 09:54:18.686580 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.98s
2025-09-20 09:54:18.686587 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.32s
2025-09-20 09:54:18.686594 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.92s
2025-09-20 09:54:18.686601 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.93s
2025-09-20 09:54:18.686612 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.09s
2025-09-20 09:54:18.686619 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.03s
2025-09-20 09:54:18.686626 | orchestrator | glance : Copying over config.json files for services -------------------- 4.88s
2025-09-20 09:54:18.686633 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.83s
2025-09-20 09:54:18.686640 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.49s
2025-09-20 09:54:18.686647 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.46s
2025-09-20 09:54:18.686654 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.22s
2025-09-20 09:54:18.686661 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.22s
2025-09-20 09:54:18.686668 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.90s
2025-09-20 09:54:18.686675 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.86s
2025-09-20 09:54:18.686682 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.82s
2025-09-20 09:54:18.686689 | orchestrator | glance : Copying over
property-protections-rules.conf ------------------- 3.74s 2025-09-20 09:54:18.686696 | orchestrator | glance : Check glance containers ---------------------------------------- 3.73s 2025-09-20 09:54:18.686703 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.53s 2025-09-20 09:54:18.686709 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.53s 2025-09-20 09:54:21.726883 | orchestrator | 2025-09-20 09:54:21 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:54:21.726985 | orchestrator | 2025-09-20 09:54:21 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:54:21.727556 | orchestrator | 2025-09-20 09:54:21 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED 2025-09-20 09:54:21.728400 | orchestrator | 2025-09-20 09:54:21 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:54:21.728431 | orchestrator | 2025-09-20 09:54:21 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:54:24.765924 | orchestrator | 2025-09-20 09:54:24 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:54:24.766389 | orchestrator | 2025-09-20 09:54:24 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:54:24.766772 | orchestrator | 2025-09-20 09:54:24 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED 2025-09-20 09:54:24.767688 | orchestrator | 2025-09-20 09:54:24 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:54:24.767711 | orchestrator | 2025-09-20 09:54:24 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:54:27.801878 | orchestrator | 2025-09-20 09:54:27 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:54:27.802711 | orchestrator | 2025-09-20 09:54:27 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in 
state STARTED 2025-09-20 09:54:27.804744 | orchestrator | 2025-09-20 09:54:27 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED 2025-09-20 09:54:27.806681 | orchestrator | 2025-09-20 09:54:27 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:54:27.807087 | orchestrator | 2025-09-20 09:54:27 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:54:30.846817 | orchestrator | 2025-09-20 09:54:30 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:54:30.848078 | orchestrator | 2025-09-20 09:54:30 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:54:30.849932 | orchestrator | 2025-09-20 09:54:30 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED 2025-09-20 09:54:30.851366 | orchestrator | 2025-09-20 09:54:30 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state STARTED 2025-09-20 09:54:30.851699 | orchestrator | 2025-09-20 09:54:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:54:33.890332 | orchestrator | 2025-09-20 09:54:33 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:54:33.892305 | orchestrator | 2025-09-20 09:54:33 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:54:33.893777 | orchestrator | 2025-09-20 09:54:33 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED 2025-09-20 09:54:33.898328 | orchestrator | 2025-09-20 09:54:33 | INFO  | Task 1aa2ac7f-8d33-4719-b833-18983fbb0ed1 is in state SUCCESS 2025-09-20 09:54:33.898664 | orchestrator | 2025-09-20 09:54:33.900940 | orchestrator | 2025-09-20 09:54:33.900975 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:54:33.900988 | orchestrator | 2025-09-20 09:54:33.900999 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 
09:54:33.901010 | orchestrator | Saturday 20 September 2025 09:51:11 +0000 (0:00:00.289) 0:00:00.289 **** 2025-09-20 09:54:33.901057 | orchestrator | ok: [testbed-manager] 2025-09-20 09:54:33.901072 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:54:33.901083 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:54:33.901095 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:54:33.901106 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:54:33.901117 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:54:33.901128 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:54:33.901138 | orchestrator | 2025-09-20 09:54:33.901149 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:54:33.901161 | orchestrator | Saturday 20 September 2025 09:51:13 +0000 (0:00:01.154) 0:00:01.444 **** 2025-09-20 09:54:33.901172 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-20 09:54:33.901183 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-20 09:54:33.901194 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-20 09:54:33.901205 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-20 09:54:33.901215 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-20 09:54:33.901248 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-20 09:54:33.901259 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-20 09:54:33.901270 | orchestrator | 2025-09-20 09:54:33.901280 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-20 09:54:33.901291 | orchestrator | 2025-09-20 09:54:33.901302 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-20 09:54:33.901313 | orchestrator | Saturday 20 September 2025 09:51:13 +0000 (0:00:00.792) 0:00:02.236 **** 
2025-09-20 09:54:33.901436 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:54:33.901451 | orchestrator | 2025-09-20 09:54:33.901462 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-20 09:54:33.901473 | orchestrator | Saturday 20 September 2025 09:51:15 +0000 (0:00:01.606) 0:00:03.842 **** 2025-09-20 09:54:33.901487 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 09:54:33.901576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.901594 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.901608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.901635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.901651 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.901665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.901678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.901699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.901714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.901732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.901753 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 09:54:33.901770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.901784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.901797 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.901817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.901831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.901849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.901863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.901884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.901899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.901910 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.901928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.901940 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.901955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.901967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.901978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.901996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.902008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.902073 | orchestrator | 2025-09-20 09:54:33.902088 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-20 09:54:33.902099 | orchestrator | Saturday 20 September 2025 09:51:19 +0000 (0:00:04.328) 0:00:08.171 **** 2025-09-20 09:54:33.902110 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:54:33.902131 | orchestrator | 2025-09-20 09:54:33.902142 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-20 09:54:33.902153 | orchestrator | Saturday 20 September 2025 09:51:21 +0000 (0:00:01.801) 0:00:09.972 **** 2025-09-20 
09:54:33.902164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 09:54:33.902176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.902193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.902205 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.902333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.902348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.902359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.902380 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.902392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902420 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902463 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902501 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-20 09:54:33.902514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902611 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902711 | orchestrator |
2025-09-20 09:54:33.902722 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-09-20 09:54:33.902733 | orchestrator | Saturday 20 September 2025 09:51:28 +0000 (0:00:06.493) 0:00:16.466 ****
2025-09-20 09:54:33.902745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.902757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902814 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-20 09:54:33.902833 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.902844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902856 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-20 09:54:33.902869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.902896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.902946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.902969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.902996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.903045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.903058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903069 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:33.903081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903092 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:54:33.903103 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:33.903114 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:54:33.903125 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:54:33.903136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.903323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903356 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:54:33.903367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.903387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903420 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:54:33.903432 | orchestrator |
2025-09-20 09:54:33.903443 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-09-20 09:54:33.903454 | orchestrator | Saturday 20 September 2025 09:51:29 +0000 (0:00:01.501) 0:00:17.967 ****
2025-09-20 09:54:33.903466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-20 09:54:33.903477 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.903489 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903501 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-20 09:54:33.903521 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.903541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.903554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.903565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.903577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.903652 | orchestrator | skipping: [testbed-manager]
2025-09-20 09:54:33.903672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-20 09:54:33.903692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.903703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.903722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-20 09:54:33.903734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-20 09:54:33.903745 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:33.903756 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:33.903767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 09:54:33.903778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:54:33.903790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:54:33.903813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 09:54:33.903826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-20 09:54:33.903842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 09:54:33.903854 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:33.903866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 09:54:33.903877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 09:54:33.903888 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.903899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 09:54:33.903911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 09:54:33.903928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 09:54:33.903939 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.904059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-20 09:54:33.904072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-20 09:54:33.904092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-20 09:54:33.904105 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.904116 | orchestrator | 2025-09-20 09:54:33.904127 | 
orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-20 09:54:33.904138 | orchestrator | Saturday 20 September 2025 09:51:31 +0000 (0:00:02.014) 0:00:19.982 **** 2025-09-20 09:54:33.904149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.904161 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 09:54:33.904173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.904191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.904207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.904276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.904297 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.904309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.904332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904362 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904379 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904478 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 09:54:33.904491 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.904561 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.904611 | orchestrator | 2025-09-20 09:54:33.904622 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-20 09:54:33.904633 | orchestrator | Saturday 20 September 2025 09:51:37 +0000 (0:00:05.859) 0:00:25.842 **** 2025-09-20 09:54:33.904644 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 09:54:33.904655 | orchestrator | 2025-09-20 09:54:33.904666 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-20 09:54:33.904683 | orchestrator | Saturday 20 September 2025 09:51:38 +0000 (0:00:01.031) 0:00:26.874 **** 2025-09-20 09:54:33.904695 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1856848, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 
'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904707 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1856848, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904726 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1856848, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904737 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1856848, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904757 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1856870, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1884851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904769 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1856848, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904786 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1856870, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1884851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904797 | orchestrator | 
skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1856870, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1884851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904813 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1856844, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.179485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904824 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1856848, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904833 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1856844, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.179485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904848 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1856844, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.179485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904858 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1856848, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.904874 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1856865, 'dev': 107, 'nlink': 1, 'atime': 
1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1850903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904884 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1856865, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1850903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904900 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1856865, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1850903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904910 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1856870, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1884851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904920 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1856870, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1884851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904934 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1856870, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1884851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904944 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1856839, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1771517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904959 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1856844, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.179485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904970 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1856844, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.179485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904986 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1856850, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1811695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.904996 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1856839, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1771517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905006 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1856839, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1771517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905016 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1856870, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1884851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.905030 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1856844, 'dev': 107, 'nlink': 1, 'atime': 
1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.179485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905041 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1856865, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1850903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905057 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1856864, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1844852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905073 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1856850, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1811695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905083 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1856865, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1850903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905093 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1856850, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1811695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905103 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1856839, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1771517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905117 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1856855, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.182485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905127 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1856865, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1850903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905149 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1856864, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1844852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905159 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1856839, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1771517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905169 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1856846, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905179 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1856864, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1844852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905189 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1856850, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 
'mtime': 1758359983.0, 'ctime': 1758360536.1811695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905203 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1856839, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1771517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905214 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1856855, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.182485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:54:33.905251 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1856869, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 
1758360536.1881237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905615 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1856864, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1844852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905626 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1856850, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1811695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905635 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1856846, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905645 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1856855, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.182485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905660 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1856850, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1811695, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905671 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1856855, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.182485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905694 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1856837, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1762247, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905705 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1856869, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1881237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905715 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1856844, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.179485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.905725 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1856846, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905735 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1856846, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1804852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905749 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1856864, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1844852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-20 09:54:33.905768 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1856877, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 
1758360536.1904852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:54:33.905784 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1856864, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1844852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-20 09:54:33.905794 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-20 09:54:33.905804 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-20 09:54:33.905814 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-20 09:54:33.905824 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-20 09:54:33.905838 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-20 09:54:33.905854 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-20 09:54:33.905869 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-09-20 09:54:33.905880 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2025-09-20 09:54:33.905890 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2025-09-20 09:54:33.905900 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2025-09-20 09:54:33.905910 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-20 09:54:33.905923 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-20 09:54:33.905940 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-20 09:54:33.905955 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2025-09-20 09:54:33.905965 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-20 09:54:33.905975 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2025-09-20 09:54:33.905985 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-20 09:54:33.905995 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-20 09:54:33.906058 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-20 09:54:33.906072 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-20 09:54:33.906088 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-20 09:54:33.906099 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-20 09:54:33.906109 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-20 09:54:33.906119 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2025-09-20 09:54:33.906129 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-20 09:54:33.906150 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2025-09-20 09:54:33.906160 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2025-09-20 09:54:33.906175 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2025-09-20 09:54:33.906186 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-20 09:54:33.906196 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-09-20 09:54:33.906206 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-09-20 09:54:33.906216 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules)
2025-09-20 09:54:33.906290 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2025-09-20 09:54:33.906303 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2025-09-20 09:54:33.906320 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2025-09-20 09:54:33.906331 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-20 09:54:33.906342 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-09-20 09:54:33.906353 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2025-09-20 09:54:33.906364 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2025-09-20 09:54:33.906386 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2025-09-20 09:54:33.906398 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2025-09-20 09:54:33.906416 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-20 09:54:33.906427 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2025-09-20 09:54:33.906438 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-20 09:54:33.906449 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-20 09:54:33.906467 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-20 09:54:33.906478 | orchestrator | skipping: [testbed-node-3]
2025-09-20 09:54:33.906494 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-20 09:54:33.906506 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-20 09:54:33.906517 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:54:33.906528 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:54:33.906545 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-20 09:54:33.906556 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:54:33.906567 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2025-09-20 09:54:33.906579 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2025-09-20 09:54:33.906590 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules)
2025-09-20 09:54:33.906610 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules)
2025-09-20 09:54:33.906624 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2025-09-20 09:54:33.906634 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2025-09-20 09:54:33.906648 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-20 09:54:33.906659 | orchestrator | skipping: [testbed-node-5]
2025-09-20 09:54:33.906669 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2025-09-20 09:54:33.906679 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-20 09:54:33.906698 | orchestrator | skipping: [testbed-node-4]
2025-09-20 09:54:33.906706 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-09-20 09:54:33.906714 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-20 09:54:33.906726 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-20 09:54:33.906734 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-20 09:54:33.906748 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1856877, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1904852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False,
'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.906757 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1856868, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1874852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.906766 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1856840, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1774852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.906779 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1856838, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.176534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.906788 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1856863, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1844852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.906838 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1856858, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.182485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.906849 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1856875, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1900942, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-20 09:54:33.906857 | orchestrator | 2025-09-20 09:54:33.906865 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-20 09:54:33.906874 | orchestrator | Saturday 20 September 2025 09:52:05 +0000 
(0:00:27.438) 0:00:54.313 **** 2025-09-20 09:54:33.906881 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 09:54:33.906889 | orchestrator | 2025-09-20 09:54:33.906901 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-20 09:54:33.906909 | orchestrator | Saturday 20 September 2025 09:52:06 +0000 (0:00:00.766) 0:00:55.080 **** 2025-09-20 09:54:33.906918 | orchestrator | [WARNING]: Skipped 2025-09-20 09:54:33.906926 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.906934 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-20 09:54:33.906942 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.906950 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-20 09:54:33.906958 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 09:54:33.906966 | orchestrator | [WARNING]: Skipped 2025-09-20 09:54:33.906974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.906988 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-20 09:54:33.906996 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907003 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-20 09:54:33.907011 | orchestrator | [WARNING]: Skipped 2025-09-20 09:54:33.907019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907026 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-20 09:54:33.907034 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907042 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-20 09:54:33.907050 | orchestrator | [WARNING]: Skipped 
2025-09-20 09:54:33.907058 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907065 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-20 09:54:33.907073 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907081 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-20 09:54:33.907088 | orchestrator | [WARNING]: Skipped 2025-09-20 09:54:33.907096 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907104 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-20 09:54:33.907112 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907120 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-20 09:54:33.907128 | orchestrator | [WARNING]: Skipped 2025-09-20 09:54:33.907135 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907143 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-20 09:54:33.907151 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907159 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-20 09:54:33.907166 | orchestrator | [WARNING]: Skipped 2025-09-20 09:54:33.907174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907182 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-20 09:54:33.907190 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-20 09:54:33.907197 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-20 09:54:33.907205 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 09:54:33.907213 | 
orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-20 09:54:33.907234 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-20 09:54:33.907243 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-20 09:54:33.907251 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-20 09:54:33.907258 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-20 09:54:33.907266 | orchestrator | 2025-09-20 09:54:33.907274 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-20 09:54:33.907286 | orchestrator | Saturday 20 September 2025 09:52:09 +0000 (0:00:02.645) 0:00:57.725 **** 2025-09-20 09:54:33.907294 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-20 09:54:33.907302 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-20 09:54:33.907310 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:33.907317 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:33.907325 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-20 09:54:33.907333 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.907340 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-20 09:54:33.907348 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:33.907361 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-20 09:54:33.907369 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.907376 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-20 09:54:33.907384 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.907392 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-20 09:54:33.907400 | orchestrator | 2025-09-20 09:54:33.907408 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-20 09:54:33.907415 | orchestrator | Saturday 20 September 2025 09:52:30 +0000 (0:00:20.888) 0:01:18.613 **** 2025-09-20 09:54:33.907423 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-20 09:54:33.907435 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:33.907443 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-20 09:54:33.907451 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-20 09:54:33.907458 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:33.907466 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:33.907474 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-20 09:54:33.907482 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.907489 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-20 09:54:33.907497 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.907505 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-20 09:54:33.907513 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.907521 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-20 09:54:33.907528 | orchestrator | 2025-09-20 09:54:33.907536 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-20 09:54:33.907544 | orchestrator | Saturday 20 September 2025 
09:52:35 +0000 (0:00:05.450) 0:01:24.063 **** 2025-09-20 09:54:33.907552 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-20 09:54:33.907560 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-20 09:54:33.907568 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-20 09:54:33.907576 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:33.907584 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:33.907591 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:33.907599 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-20 09:54:33.907607 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.907615 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-20 09:54:33.907622 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.907630 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-20 09:54:33.907638 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-20 09:54:33.907646 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.907654 | orchestrator | 2025-09-20 09:54:33.907661 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-20 09:54:33.907675 | orchestrator | Saturday 20 September 2025 09:52:38 +0000 (0:00:02.474) 0:01:26.538 **** 2025-09-20 
09:54:33.907683 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-20 09:54:33.907690 | orchestrator | 2025-09-20 09:54:33.907698 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-20 09:54:33.907706 | orchestrator | Saturday 20 September 2025 09:52:38 +0000 (0:00:00.642) 0:01:27.180 **** 2025-09-20 09:54:33.907714 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:54:33.907721 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:33.907729 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:33.907737 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:33.907745 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.907752 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.907764 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.907772 | orchestrator | 2025-09-20 09:54:33.907779 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-20 09:54:33.907787 | orchestrator | Saturday 20 September 2025 09:52:39 +0000 (0:00:00.963) 0:01:28.144 **** 2025-09-20 09:54:33.907795 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:54:33.907803 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.907810 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:54:33.907818 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.907826 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.907833 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:54:33.907841 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:54:33.907849 | orchestrator | 2025-09-20 09:54:33.907856 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-20 09:54:33.907864 | orchestrator | Saturday 20 September 2025 09:52:42 +0000 (0:00:02.741) 0:01:30.886 **** 2025-09-20 09:54:33.907872 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-20 09:54:33.907880 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:54:33.907887 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-20 09:54:33.907895 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.907903 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-20 09:54:33.907911 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:33.907918 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-20 09:54:33.907926 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:33.907938 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-20 09:54:33.907946 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:33.907954 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-20 09:54:33.907961 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.907969 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-20 09:54:33.907977 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.907984 | orchestrator | 2025-09-20 09:54:33.907992 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-20 09:54:33.908000 | orchestrator | Saturday 20 September 2025 09:52:44 +0000 (0:00:02.425) 0:01:33.311 **** 2025-09-20 09:54:33.908008 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-20 09:54:33.908015 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:33.908023 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-20 09:54:33.908031 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:33.908039 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-20 09:54:33.908051 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:33.908059 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-20 09:54:33.908067 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.908075 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-20 09:54:33.908082 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.908090 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-20 09:54:33.908098 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.908106 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-20 09:54:33.908114 | orchestrator | 2025-09-20 09:54:33.908121 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-20 09:54:33.908129 | orchestrator | Saturday 20 September 2025 09:52:47 +0000 (0:00:02.073) 0:01:35.384 **** 2025-09-20 09:54:33.908137 | orchestrator | [WARNING]: Skipped 2025-09-20 09:54:33.908145 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-20 09:54:33.908152 | orchestrator | due to this access issue: 2025-09-20 09:54:33.908160 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-20 09:54:33.908168 | orchestrator | not a directory 2025-09-20 09:54:33.908176 | orchestrator | ok: [testbed-manager -> 
localhost] 2025-09-20 09:54:33.908184 | orchestrator | 2025-09-20 09:54:33.908191 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-20 09:54:33.908199 | orchestrator | Saturday 20 September 2025 09:52:47 +0000 (0:00:00.902) 0:01:36.286 **** 2025-09-20 09:54:33.908207 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:54:33.908215 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:33.908235 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:33.908243 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:33.908251 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.908259 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.908266 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.908274 | orchestrator | 2025-09-20 09:54:33.908282 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-20 09:54:33.908290 | orchestrator | Saturday 20 September 2025 09:52:48 +0000 (0:00:00.696) 0:01:36.983 **** 2025-09-20 09:54:33.908297 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:54:33.908305 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:54:33.908313 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:54:33.908320 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:54:33.908328 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:54:33.908339 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:54:33.908347 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:54:33.908355 | orchestrator | 2025-09-20 09:54:33.908363 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-20 09:54:33.908370 | orchestrator | Saturday 20 September 2025 09:52:49 +0000 (0:00:00.557) 0:01:37.540 **** 2025-09-20 09:54:33.908378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.908392 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-20 09:54:33.908417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.908426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.908434 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.908443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.908463 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908471 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.908492 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.908501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.908517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-20 09:54:33.908526 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-20 09:54:33.908539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908565 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.908574 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.908590 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.908607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.908619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.908646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.908660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-20 09:54:33.908668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2025-09-20 09:54:33.908693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-20 09:54:33.908701 | orchestrator | 2025-09-20 09:54:33.908709 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-20 09:54:33.908717 | orchestrator | Saturday 20 September 2025 09:52:53 +0000 (0:00:04.292) 0:01:41.833 **** 2025-09-20 09:54:33.908724 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-20 09:54:33.908732 | orchestrator | skipping: [testbed-manager] 2025-09-20 09:54:33.908740 | orchestrator | 2025-09-20 09:54:33.908748 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 09:54:33.908756 | orchestrator | Saturday 20 September 2025 09:52:54 +0000 (0:00:01.046) 0:01:42.880 **** 2025-09-20 09:54:33.908768 | orchestrator | 2025-09-20 09:54:33.908780 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 09:54:33.908788 | orchestrator | Saturday 20 September 2025 09:52:54 +0000 (0:00:00.062) 0:01:42.942 **** 2025-09-20 09:54:33.908795 | orchestrator | 2025-09-20 09:54:33.908803 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 09:54:33.908811 | orchestrator | Saturday 20 September 2025 09:52:54 +0000 (0:00:00.060) 0:01:43.003 **** 2025-09-20 09:54:33.908819 | orchestrator | 2025-09-20 09:54:33.908826 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2025-09-20 09:54:33.908834 | orchestrator | Saturday 20 September 2025 09:52:54 +0000 (0:00:00.060) 0:01:43.063 **** 2025-09-20 09:54:33.908842 | orchestrator | 2025-09-20 09:54:33.908850 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 09:54:33.908857 | orchestrator | Saturday 20 September 2025 09:52:54 +0000 (0:00:00.181) 0:01:43.245 **** 2025-09-20 09:54:33.908865 | orchestrator | 2025-09-20 09:54:33.908873 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 09:54:33.908881 | orchestrator | Saturday 20 September 2025 09:52:54 +0000 (0:00:00.062) 0:01:43.307 **** 2025-09-20 09:54:33.908888 | orchestrator | 2025-09-20 09:54:33.908896 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-20 09:54:33.908904 | orchestrator | Saturday 20 September 2025 09:52:55 +0000 (0:00:00.062) 0:01:43.370 **** 2025-09-20 09:54:33.908911 | orchestrator | 2025-09-20 09:54:33.908919 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-20 09:54:33.908927 | orchestrator | Saturday 20 September 2025 09:52:55 +0000 (0:00:00.090) 0:01:43.460 **** 2025-09-20 09:54:33.908935 | orchestrator | changed: [testbed-manager] 2025-09-20 09:54:33.908942 | orchestrator | 2025-09-20 09:54:33.908950 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-20 09:54:33.908962 | orchestrator | Saturday 20 September 2025 09:53:12 +0000 (0:00:17.796) 0:02:01.257 **** 2025-09-20 09:54:33.908970 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:54:33.908978 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:54:33.908986 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:54:33.908994 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:54:33.909002 | orchestrator | 
changed: [testbed-node-0] 2025-09-20 09:54:33.909009 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:54:33.909017 | orchestrator | changed: [testbed-manager] 2025-09-20 09:54:33.909025 | orchestrator | 2025-09-20 09:54:33.909033 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-20 09:54:33.909041 | orchestrator | Saturday 20 September 2025 09:53:25 +0000 (0:00:12.784) 0:02:14.042 **** 2025-09-20 09:54:33.909048 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:54:33.909056 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:54:33.909063 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:54:33.909071 | orchestrator | 2025-09-20 09:54:33.909079 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-20 09:54:33.909087 | orchestrator | Saturday 20 September 2025 09:53:35 +0000 (0:00:09.661) 0:02:23.703 **** 2025-09-20 09:54:33.909095 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:54:33.909102 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:54:33.909110 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:54:33.909118 | orchestrator | 2025-09-20 09:54:33.909126 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-20 09:54:33.909134 | orchestrator | Saturday 20 September 2025 09:53:47 +0000 (0:00:11.672) 0:02:35.375 **** 2025-09-20 09:54:33.909141 | orchestrator | changed: [testbed-manager] 2025-09-20 09:54:33.909149 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:54:33.909157 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:54:33.909164 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:54:33.909172 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:54:33.909180 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:54:33.909192 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:54:33.909200 | orchestrator | 2025-09-20 
09:54:33.909208 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-20 09:54:33.909216 | orchestrator | Saturday 20 September 2025 09:54:03 +0000 (0:00:16.099) 0:02:51.475 **** 2025-09-20 09:54:33.909261 | orchestrator | changed: [testbed-manager] 2025-09-20 09:54:33.909269 | orchestrator | 2025-09-20 09:54:33.909277 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-20 09:54:33.909285 | orchestrator | Saturday 20 September 2025 09:54:11 +0000 (0:00:08.344) 0:02:59.820 **** 2025-09-20 09:54:33.909293 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:54:33.909301 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:54:33.909308 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:54:33.909316 | orchestrator | 2025-09-20 09:54:33.909324 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-20 09:54:33.909332 | orchestrator | Saturday 20 September 2025 09:54:16 +0000 (0:00:04.898) 0:03:04.718 **** 2025-09-20 09:54:33.909340 | orchestrator | changed: [testbed-manager] 2025-09-20 09:54:33.909347 | orchestrator | 2025-09-20 09:54:33.909355 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-20 09:54:33.909363 | orchestrator | Saturday 20 September 2025 09:54:21 +0000 (0:00:04.914) 0:03:09.633 **** 2025-09-20 09:54:33.909371 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:54:33.909378 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:54:33.909386 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:54:33.909394 | orchestrator | 2025-09-20 09:54:33.909401 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:54:33.909409 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-20 09:54:33.909418 | 
orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-20 09:54:33.909426 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-20 09:54:33.909438 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-20 09:54:33.909446 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-20 09:54:33.909453 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-20 09:54:33.909461 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-20 09:54:33.909469 | orchestrator | 2025-09-20 09:54:33.909477 | orchestrator | 2025-09-20 09:54:33.909485 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:54:33.909493 | orchestrator | Saturday 20 September 2025 09:54:33 +0000 (0:00:11.917) 0:03:21.550 **** 2025-09-20 09:54:33.909500 | orchestrator | =============================================================================== 2025-09-20 09:54:33.909508 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.44s 2025-09-20 09:54:33.909516 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 20.89s 2025-09-20 09:54:33.909524 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.80s 2025-09-20 09:54:33.909531 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.10s 2025-09-20 09:54:33.909539 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.78s 2025-09-20 09:54:33.909551 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.92s 2025-09-20 
09:54:33.909565 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.67s 2025-09-20 09:54:33.909573 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.66s 2025-09-20 09:54:33.909580 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.34s 2025-09-20 09:54:33.909588 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.49s 2025-09-20 09:54:33.909596 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.86s 2025-09-20 09:54:33.909604 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.45s 2025-09-20 09:54:33.909611 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.91s 2025-09-20 09:54:33.909619 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 4.90s 2025-09-20 09:54:33.909627 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.33s 2025-09-20 09:54:33.909634 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.29s 2025-09-20 09:54:33.909642 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.74s 2025-09-20 09:54:33.909650 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.65s 2025-09-20 09:54:33.909658 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.47s 2025-09-20 09:54:33.909665 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.43s 2025-09-20 09:54:36.960574 | orchestrator | 2025-09-20 09:54:36 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:54:36.971545 | orchestrator | 2025-09-20 09:54:36 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 
2025-09-20 09:54:36.971585 | orchestrator | 2025-09-20 09:54:36 | INFO  | Task 4ef1abf3-2c02-492d-b127-404ee85e791a is in state STARTED 2025-09-20 09:54:36.972337 | orchestrator | 2025-09-20 09:54:36 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state STARTED 2025-09-20 09:54:36.974148 | orchestrator | 2025-09-20 09:54:36 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:55:01.492608 | orchestrator | 2025-09-20 09:55:01 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:55:01.492929 | orchestrator | 2025-09-20 09:55:01 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:55:01.494521 | orchestrator | 2025-09-20 09:55:01 | INFO  | Task 4ef1abf3-2c02-492d-b127-404ee85e791a is in state STARTED 2025-09-20 09:55:01.496633 | orchestrator | 2025-09-20 09:55:01 | INFO  | Task 1c4ee036-7806-4558-a320-9178c8979a8c is in state SUCCESS 2025-09-20 09:55:01.496806 | orchestrator | 2025-09-20 09:55:01.498666 | orchestrator | 2025-09-20 09:55:01.498700 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:55:01.498712 | orchestrator | 2025-09-20 09:55:01.498723 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 09:55:01.498761 | orchestrator | Saturday 20 September 2025 09:51:23 +0000
(0:00:00.354) 0:00:00.354 **** 2025-09-20 09:55:01.498773 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:55:01.498785 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:55:01.498796 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:55:01.498806 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:55:01.498817 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:55:01.498828 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:55:01.498838 | orchestrator | 2025-09-20 09:55:01.498849 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:55:01.498860 | orchestrator | Saturday 20 September 2025 09:51:24 +0000 (0:00:00.635) 0:00:00.990 **** 2025-09-20 09:55:01.498931 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-20 09:55:01.498961 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-20 09:55:01.498973 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-20 09:55:01.498983 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-20 09:55:01.498994 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-20 09:55:01.499005 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-20 09:55:01.499051 | orchestrator | 2025-09-20 09:55:01.499062 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-20 09:55:01.499073 | orchestrator | 2025-09-20 09:55:01.499083 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 09:55:01.499094 | orchestrator | Saturday 20 September 2025 09:51:24 +0000 (0:00:00.618) 0:00:01.608 **** 2025-09-20 09:55:01.499105 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:55:01.499118 | orchestrator | 2025-09-20 09:55:01.499129 | 
orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-20 09:55:01.499139 | orchestrator | Saturday 20 September 2025 09:51:26 +0000 (0:00:01.120) 0:00:02.729 **** 2025-09-20 09:55:01.499151 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-20 09:55:01.499161 | orchestrator | 2025-09-20 09:55:01.499172 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-20 09:55:01.499183 | orchestrator | Saturday 20 September 2025 09:51:29 +0000 (0:00:03.698) 0:00:06.427 **** 2025-09-20 09:55:01.499194 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-20 09:55:01.499205 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-20 09:55:01.499216 | orchestrator | 2025-09-20 09:55:01.499249 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-20 09:55:01.499264 | orchestrator | Saturday 20 September 2025 09:51:36 +0000 (0:00:06.749) 0:00:13.177 **** 2025-09-20 09:55:01.499277 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 09:55:01.499290 | orchestrator | 2025-09-20 09:55:01.499302 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-20 09:55:01.499314 | orchestrator | Saturday 20 September 2025 09:51:39 +0000 (0:00:03.333) 0:00:16.511 **** 2025-09-20 09:55:01.499326 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 09:55:01.499340 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-20 09:55:01.499353 | orchestrator | 2025-09-20 09:55:01.499366 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-20 09:55:01.499377 | orchestrator | Saturday 20 
September 2025 09:51:44 +0000 (0:00:04.497) 0:00:21.008 **** 2025-09-20 09:55:01.499390 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 09:55:01.499402 | orchestrator | 2025-09-20 09:55:01.499415 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-20 09:55:01.499427 | orchestrator | Saturday 20 September 2025 09:51:47 +0000 (0:00:03.319) 0:00:24.327 **** 2025-09-20 09:55:01.499450 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-20 09:55:01.499462 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-20 09:55:01.499475 | orchestrator | 2025-09-20 09:55:01.499487 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-20 09:55:01.499500 | orchestrator | Saturday 20 September 2025 09:51:55 +0000 (0:00:08.125) 0:00:32.453 **** 2025-09-20 09:55:01.499517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.499656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.499676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.499688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.499728 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.499750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.499771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.499789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.499802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.499814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.499833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.499844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.499856 | orchestrator | 2025-09-20 09:55:01.499872 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 09:55:01.499884 | orchestrator | Saturday 20 September 2025 09:51:58 +0000 (0:00:02.612) 0:00:35.066 **** 2025-09-20 09:55:01.499895 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.499906 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:55:01.499917 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:55:01.499928 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:55:01.499939 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:55:01.499950 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:55:01.499960 | orchestrator | 2025-09-20 09:55:01.499972 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 09:55:01.499982 | orchestrator | Saturday 20 September 2025 09:51:58 +0000 (0:00:00.532) 0:00:35.598 **** 2025-09-20 09:55:01.499993 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.500004 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:55:01.500028 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:55:01.500044 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:55:01.500056 | orchestrator | 2025-09-20 09:55:01.500067 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs 
exists] ************* 2025-09-20 09:55:01.500078 | orchestrator | Saturday 20 September 2025 09:51:59 +0000 (0:00:00.981) 0:00:36.580 **** 2025-09-20 09:55:01.500088 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-20 09:55:01.500099 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-20 09:55:01.500110 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-20 09:55:01.500121 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-20 09:55:01.500132 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-20 09:55:01.500143 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-20 09:55:01.500153 | orchestrator | 2025-09-20 09:55:01.500164 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-20 09:55:01.500175 | orchestrator | Saturday 20 September 2025 09:52:01 +0000 (0:00:01.844) 0:00:38.424 **** 2025-09-20 09:55:01.500187 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 09:55:01.500206 | 
orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 09:55:01.500218 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 09:55:01.500268 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 09:55:01.500286 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-20 09:55:01.500305 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  
2025-09-20 09:55:01.500316 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 09:55:01.500328 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 09:55:01.500351 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 09:55:01.500363 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 09:55:01.500382 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 09:55:01.500394 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-20 09:55:01.500405 | orchestrator | 2025-09-20 09:55:01.500416 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-20 09:55:01.500427 | orchestrator | Saturday 20 September 2025 09:52:06 +0000 (0:00:04.371) 0:00:42.796 **** 2025-09-20 09:55:01.500438 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 09:55:01.500449 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 09:55:01.500460 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-20 09:55:01.500471 | orchestrator | 2025-09-20 09:55:01.500482 | orchestrator | TASK [cinder : 
Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-20 09:55:01.500493 | orchestrator | Saturday 20 September 2025 09:52:07 +0000 (0:00:01.791) 0:00:44.588 **** 2025-09-20 09:55:01.500505 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-20 09:55:01.500524 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-20 09:55:01.500542 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-20 09:55:01.500560 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-20 09:55:01.500578 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-20 09:55:01.500605 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-20 09:55:01.500617 | orchestrator | 2025-09-20 09:55:01.500628 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-20 09:55:01.500639 | orchestrator | Saturday 20 September 2025 09:52:11 +0000 (0:00:03.682) 0:00:48.270 **** 2025-09-20 09:55:01.500650 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-20 09:55:01.500661 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-20 09:55:01.500672 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-20 09:55:01.500683 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-20 09:55:01.500694 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-20 09:55:01.500704 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-20 09:55:01.500723 | orchestrator | 2025-09-20 09:55:01.500734 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-20 09:55:01.500756 | orchestrator | Saturday 20 September 2025 09:52:12 +0000 (0:00:00.939) 0:00:49.210 **** 2025-09-20 09:55:01.500767 | orchestrator | 
skipping: [testbed-node-0] 2025-09-20 09:55:01.500778 | orchestrator | 2025-09-20 09:55:01.500788 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-20 09:55:01.500799 | orchestrator | Saturday 20 September 2025 09:52:12 +0000 (0:00:00.169) 0:00:49.380 **** 2025-09-20 09:55:01.500810 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.500821 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:55:01.500831 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:55:01.500842 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:55:01.500852 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:55:01.500863 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:55:01.500874 | orchestrator | 2025-09-20 09:55:01.500885 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 09:55:01.500895 | orchestrator | Saturday 20 September 2025 09:52:13 +0000 (0:00:00.672) 0:00:50.052 **** 2025-09-20 09:55:01.500907 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:55:01.500919 | orchestrator | 2025-09-20 09:55:01.500930 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-20 09:55:01.500941 | orchestrator | Saturday 20 September 2025 09:52:14 +0000 (0:00:01.104) 0:00:51.156 **** 2025-09-20 09:55:01.500952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.500964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.500982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.501047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.501061 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.501073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.501084 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.501096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.501634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.501664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.501676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.501688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.501699 | orchestrator | 2025-09-20 09:55:01.501711 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-20 09:55:01.501722 | orchestrator | Saturday 20 September 2025 09:52:17 +0000 (0:00:03.235) 0:00:54.392 **** 2025-09-20 09:55:01.501733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 09:55:01.501752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.501774 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.501790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 09:55:01.501802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.501813 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:55:01.501825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 09:55:01.501836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.501847 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:55:01.501858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.501883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.501894 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:55:01.501911 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.501923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.501934 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:55:01.501945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.501957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.501974 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:55:01.501985 | orchestrator | 2025-09-20 09:55:01.501996 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-20 09:55:01.502007 | orchestrator | Saturday 20 September 2025 09:52:19 +0000 (0:00:01.913) 0:00:56.305 **** 2025-09-20 09:55:01.502084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 09:55:01.502100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.502112 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.502123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 09:55:01.502135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.502190 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:55:01.502202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 09:55:01.502240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.502253 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:55:01.502270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.502282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.502293 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:55:01.502304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.502323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.502334 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:55:01.502351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.502368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.502380 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:55:01.502391 | orchestrator | 2025-09-20 09:55:01.502402 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-20 09:55:01.502413 | orchestrator | Saturday 20 September 2025 09:52:21 +0000 (0:00:01.954) 0:00:58.260 **** 2025-09-20 09:55:01.502424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.502436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.502473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.502492 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502619 | orchestrator | 2025-09-20 09:55:01.502636 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-20 09:55:01.502647 | orchestrator | Saturday 20 September 2025 09:52:24 +0000 (0:00:03.245) 0:01:01.505 **** 2025-09-20 09:55:01.502658 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-20 09:55:01.502670 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:55:01.502681 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-20 09:55:01.502692 | 
orchestrator | skipping: [testbed-node-3] 2025-09-20 09:55:01.502702 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-20 09:55:01.502713 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-20 09:55:01.502724 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:55:01.502735 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-20 09:55:01.502746 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-20 09:55:01.502756 | orchestrator | 2025-09-20 09:55:01.502767 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-20 09:55:01.502777 | orchestrator | Saturday 20 September 2025 09:52:27 +0000 (0:00:02.482) 0:01:03.987 **** 2025-09-20 09:55:01.502788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.502810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.502841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.502853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.502976 | orchestrator | 2025-09-20 09:55:01.502987 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-20 09:55:01.502998 | orchestrator | Saturday 20 September 2025 09:52:38 +0000 (0:00:11.251) 0:01:15.238 **** 2025-09-20 09:55:01.503013 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.503025 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:55:01.503036 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:55:01.503047 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:55:01.503058 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:55:01.503069 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:55:01.503080 | orchestrator | 2025-09-20 09:55:01.503091 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-20 09:55:01.503102 | orchestrator | Saturday 20 September 2025 09:52:40 +0000 (0:00:02.215) 0:01:17.454 **** 2025-09-20 09:55:01.503130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 09:55:01.503149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.503161 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.503172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 
09:55:01.503184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.503195 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:55:01.503213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-20 09:55:01.503244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.503265 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:55:01.503277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.503288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.503299 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 09:55:01.503311 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.503322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.503357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.503388 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:55:01.503400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-20 09:55:01.503412 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:55:01.503423 | orchestrator | 2025-09-20 09:55:01.503434 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-20 09:55:01.503445 | orchestrator | Saturday 20 September 2025 09:52:42 +0000 (0:00:01.922) 0:01:19.376 **** 2025-09-20 09:55:01.503456 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.503466 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:55:01.503477 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:55:01.503488 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 09:55:01.503499 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:55:01.503510 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:55:01.503520 | orchestrator | 2025-09-20 09:55:01.503531 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-20 09:55:01.503542 | orchestrator | Saturday 20 September 2025 09:52:43 +0000 (0:00:01.159) 0:01:20.536 **** 2025-09-20 09:55:01.503553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.503565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.503584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.503616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-20 09:55:01.503629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.503641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.503652 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.503672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.503695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.503707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.503718 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.503730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-20 09:55:01.503741 | orchestrator | 2025-09-20 
09:55:01.503752 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-20 09:55:01.503763 | orchestrator | Saturday 20 September 2025 09:52:46 +0000 (0:00:02.593) 0:01:23.129 **** 2025-09-20 09:55:01.503775 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.503786 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:55:01.503796 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:55:01.503807 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:55:01.503818 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:55:01.503829 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:55:01.503839 | orchestrator | 2025-09-20 09:55:01.503850 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-20 09:55:01.503861 | orchestrator | Saturday 20 September 2025 09:52:47 +0000 (0:00:00.684) 0:01:23.813 **** 2025-09-20 09:55:01.503878 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:55:01.503889 | orchestrator | 2025-09-20 09:55:01.503899 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-20 09:55:01.503910 | orchestrator | Saturday 20 September 2025 09:52:49 +0000 (0:00:02.529) 0:01:26.343 **** 2025-09-20 09:55:01.503921 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:55:01.503932 | orchestrator | 2025-09-20 09:55:01.503942 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-20 09:55:01.503953 | orchestrator | Saturday 20 September 2025 09:52:52 +0000 (0:00:02.442) 0:01:28.786 **** 2025-09-20 09:55:01.503964 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:55:01.503975 | orchestrator | 2025-09-20 09:55:01.503986 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 09:55:01.503996 | orchestrator | Saturday 20 September 2025 09:53:09 +0000 (0:00:17.336) 
0:01:46.122 **** 2025-09-20 09:55:01.504007 | orchestrator | 2025-09-20 09:55:01.504023 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 09:55:01.504035 | orchestrator | Saturday 20 September 2025 09:53:09 +0000 (0:00:00.060) 0:01:46.183 **** 2025-09-20 09:55:01.504046 | orchestrator | 2025-09-20 09:55:01.504057 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 09:55:01.504067 | orchestrator | Saturday 20 September 2025 09:53:09 +0000 (0:00:00.062) 0:01:46.245 **** 2025-09-20 09:55:01.504078 | orchestrator | 2025-09-20 09:55:01.504089 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 09:55:01.504100 | orchestrator | Saturday 20 September 2025 09:53:09 +0000 (0:00:00.075) 0:01:46.321 **** 2025-09-20 09:55:01.504110 | orchestrator | 2025-09-20 09:55:01.504121 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 09:55:01.504132 | orchestrator | Saturday 20 September 2025 09:53:09 +0000 (0:00:00.061) 0:01:46.383 **** 2025-09-20 09:55:01.504142 | orchestrator | 2025-09-20 09:55:01.504158 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-20 09:55:01.504169 | orchestrator | Saturday 20 September 2025 09:53:09 +0000 (0:00:00.059) 0:01:46.442 **** 2025-09-20 09:55:01.504180 | orchestrator | 2025-09-20 09:55:01.504191 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-20 09:55:01.504202 | orchestrator | Saturday 20 September 2025 09:53:09 +0000 (0:00:00.063) 0:01:46.505 **** 2025-09-20 09:55:01.504213 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:55:01.504275 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:55:01.504288 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:55:01.504299 | orchestrator | 2025-09-20 
09:55:01.504310 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-20 09:55:01.504321 | orchestrator | Saturday 20 September 2025 09:53:37 +0000 (0:00:27.759) 0:02:14.265 **** 2025-09-20 09:55:01.504331 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:55:01.504342 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:55:01.504353 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:55:01.504364 | orchestrator | 2025-09-20 09:55:01.504375 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-20 09:55:01.504385 | orchestrator | Saturday 20 September 2025 09:53:43 +0000 (0:00:06.202) 0:02:20.467 **** 2025-09-20 09:55:01.504396 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:55:01.504407 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:55:01.504418 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:55:01.504428 | orchestrator | 2025-09-20 09:55:01.504439 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-20 09:55:01.504450 | orchestrator | Saturday 20 September 2025 09:54:53 +0000 (0:01:09.920) 0:03:30.387 **** 2025-09-20 09:55:01.504461 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:55:01.504471 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:55:01.504481 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:55:01.504491 | orchestrator | 2025-09-20 09:55:01.504501 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-20 09:55:01.504530 | orchestrator | Saturday 20 September 2025 09:54:59 +0000 (0:00:05.448) 0:03:35.835 **** 2025-09-20 09:55:01.504540 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:55:01.504550 | orchestrator | 2025-09-20 09:55:01.504559 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:55:01.504569 | 
orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-20 09:55:01.504579 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-20 09:55:01.504589 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-20 09:55:01.504599 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-20 09:55:01.504608 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-20 09:55:01.504618 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-20 09:55:01.504627 | orchestrator |
2025-09-20 09:55:01.504637 | orchestrator |
2025-09-20 09:55:01.504647 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:55:01.504657 | orchestrator | Saturday 20 September 2025 09:55:00 +0000 (0:00:01.250) 0:03:37.086 ****
2025-09-20 09:55:01.504666 | orchestrator | ===============================================================================
2025-09-20 09:55:01.504676 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 69.92s
2025-09-20 09:55:01.504685 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.76s
2025-09-20 09:55:01.504695 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.34s
2025-09-20 09:55:01.504705 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.25s
2025-09-20 09:55:01.504714 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.13s
2025-09-20 09:55:01.504724 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.75s
2025-09-20 09:55:01.504733 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.20s
2025-09-20 09:55:01.504743 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.45s
2025-09-20 09:55:01.504797 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.50s
2025-09-20 09:55:01.504809 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.37s
2025-09-20 09:55:01.504818 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.70s
2025-09-20 09:55:01.504828 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.68s
2025-09-20 09:55:01.504837 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.33s
2025-09-20 09:55:01.504847 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.32s
2025-09-20 09:55:01.504857 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.25s
2025-09-20 09:55:01.504866 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.24s
2025-09-20 09:55:01.504880 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.61s
2025-09-20 09:55:01.504890 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.59s
2025-09-20 09:55:01.504899 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.53s
2025-09-20 09:55:01.504909 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.48s
2025-09-20 09:55:01.504925 | orchestrator | 2025-09-20 09:55:01 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:55:04.537163 | orchestrator | 2025-09-20 09:55:04 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:55:04.537626 | orchestrator
| 2025-09-20 09:55:04 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED
2025-09-20 09:55:04.539516 | orchestrator | 2025-09-20 09:55:04 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED
2025-09-20 09:55:04.540096 | orchestrator | 2025-09-20 09:55:04 | INFO  | Task 4ef1abf3-2c02-492d-b127-404ee85e791a is in state STARTED
2025-09-20 09:55:04.540192 | orchestrator | 2025-09-20 09:55:04 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:55:07.569121 | orchestrator | 2025-09-20 09:55:07 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:55:07.569713 | orchestrator | 2025-09-20 09:55:07 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED
2025-09-20 09:55:07.570589 | orchestrator | 2025-09-20 09:55:07 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED
2025-09-20 09:55:07.571663 | orchestrator | 2025-09-20 09:55:07 | INFO  | Task 4ef1abf3-2c02-492d-b127-404ee85e791a is in state STARTED
2025-09-20 09:55:07.571686 | orchestrator | 2025-09-20 09:55:07 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:55:10.622529 | orchestrator | 2025-09-20 09:55:10 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:55:10.623274 | orchestrator | 2025-09-20 09:55:10 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED
2025-09-20 09:55:10.624402 | orchestrator | 2025-09-20 09:55:10 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED
2025-09-20 09:55:10.625442 | orchestrator | 2025-09-20 09:55:10 | INFO  | Task 4ef1abf3-2c02-492d-b127-404ee85e791a is in state STARTED
2025-09-20 09:55:10.625464 | orchestrator | 2025-09-20 09:55:10 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:55:13.661210 | orchestrator | 2025-09-20 09:55:13 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:55:13.663445 | orchestrator | 2025-09-20 09:55:13 | INFO  |
5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED
2025-09-20 09:56:35.545096 | orchestrator | 2025-09-20 09:56:35 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED
2025-09-20 09:56:35.545684 | orchestrator | 2025-09-20 09:56:35 | INFO  | Task 4ef1abf3-2c02-492d-b127-404ee85e791a is in state STARTED
2025-09-20 09:56:35.545720 | orchestrator | 2025-09-20 09:56:35 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:56:38.572319 | orchestrator | 2025-09-20 09:56:38 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:56:38.576160 | orchestrator | 2025-09-20 09:56:38 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED
2025-09-20 09:56:38.576217 | orchestrator | 2025-09-20 09:56:38 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED
2025-09-20 09:56:38.576277 | orchestrator | 2025-09-20 09:56:38 | INFO  | Task 4ef1abf3-2c02-492d-b127-404ee85e791a is in state STARTED
2025-09-20 09:56:38.576301 | orchestrator | 2025-09-20 09:56:38 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:56:41.593333 | orchestrator | 2025-09-20 09:56:41 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:56:41.593568 | orchestrator | 2025-09-20 09:56:41 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED
2025-09-20 09:56:41.594010 | orchestrator | 2025-09-20 09:56:41 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED
2025-09-20 09:56:41.595179 | orchestrator | 2025-09-20 09:56:41 | INFO  | Task 4ef1abf3-2c02-492d-b127-404ee85e791a is in state SUCCESS
2025-09-20 09:56:41.597168 | orchestrator |
2025-09-20 09:56:41.597223 | orchestrator |
2025-09-20 09:56:41.597262 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:56:41.597276 | orchestrator |
2025-09-20 09:56:41.597287 | orchestrator | TASK [Group hosts based on Kolla action]
***************************************
2025-09-20 09:56:41.597299 | orchestrator | Saturday 20 September 2025 09:54:37 +0000 (0:00:00.262) 0:00:00.262 ****
2025-09-20 09:56:41.597310 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:56:41.597324 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:56:41.597343 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:56:41.597361 | orchestrator |
2025-09-20 09:56:41.597378 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 09:56:41.597438 | orchestrator | Saturday 20 September 2025 09:54:37 +0000 (0:00:00.305) 0:00:00.568 ****
2025-09-20 09:56:41.597462 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-09-20 09:56:41.597480 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-09-20 09:56:41.597496 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-09-20 09:56:41.597515 | orchestrator |
2025-09-20 09:56:41.597534 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-09-20 09:56:41.597551 | orchestrator |
2025-09-20 09:56:41.597569 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-20 09:56:41.597587 | orchestrator | Saturday 20 September 2025 09:54:38 +0000 (0:00:00.544) 0:00:01.024 ****
2025-09-20 09:56:41.597605 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:56:41.597625 | orchestrator |
2025-09-20 09:56:41.597644 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-09-20 09:56:41.597661 | orchestrator | Saturday 20 September 2025 09:54:38 +0000 (0:00:00.456) 0:00:01.569 ****
2025-09-20 09:56:41.597699 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-09-20 09:56:41.597719 | orchestrator |
2025-09-20 09:56:41.597738 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-09-20 09:56:41.597757 | orchestrator | Saturday 20 September 2025 09:54:42 +0000 (0:00:03.362) 0:00:04.931 ****
2025-09-20 09:56:41.597775 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-09-20 09:56:41.597793 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-09-20 09:56:41.597811 | orchestrator |
2025-09-20 09:56:41.597828 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-09-20 09:56:41.597846 | orchestrator | Saturday 20 September 2025 09:54:49 +0000 (0:00:06.981) 0:00:11.913 ****
2025-09-20 09:56:41.597863 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-20 09:56:41.597883 | orchestrator |
2025-09-20 09:56:41.597901 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-09-20 09:56:41.597920 | orchestrator | Saturday 20 September 2025 09:54:52 +0000 (0:00:03.206) 0:00:15.120 ****
2025-09-20 09:56:41.597938 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-20 09:56:41.597952 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-09-20 09:56:41.597965 | orchestrator |
2025-09-20 09:56:41.597977 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-09-20 09:56:41.597989 | orchestrator | Saturday 20 September 2025 09:54:56 +0000 (0:00:03.914) 0:00:19.035 ****
2025-09-20 09:56:41.598001 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-20 09:56:41.598014 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-09-20 09:56:41.598077 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-09-20 09:56:41.598089 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-09-20 09:56:41.598100 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-09-20 09:56:41.598111 | orchestrator |
2025-09-20 09:56:41.598121 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-09-20 09:56:41.598132 | orchestrator | Saturday 20 September 2025 09:55:12 +0000 (0:00:16.084) 0:00:35.120 ****
2025-09-20 09:56:41.598143 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-09-20 09:56:41.598154 | orchestrator |
2025-09-20 09:56:41.598165 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-09-20 09:56:41.598175 | orchestrator | Saturday 20 September 2025 09:55:16 +0000 (0:00:04.269) 0:00:39.389 ****
2025-09-20 09:56:41.598191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-20 09:56:41.598271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-20 09:56:41.598305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-20 09:56:41.598326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-20 09:56:41.598346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-20 09:56:41.598365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-20 09:56:41.598407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:56:41.598429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:56:41.598455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:56:41.598475 | orchestrator |
2025-09-20 09:56:41.598495 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-09-20 09:56:41.598513 | orchestrator | Saturday 20 September 2025 09:55:18 +0000 (0:00:02.082) 0:00:41.472 ****
2025-09-20 09:56:41.598533 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-09-20 09:56:41.598551 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-09-20 09:56:41.598562 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-09-20 09:56:41.598573 | orchestrator |
2025-09-20 09:56:41.598583 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-09-20 09:56:41.598594 | orchestrator | Saturday 20 September 2025 09:55:19 +0000 (0:00:01.033) 0:00:42.505 ****
2025-09-20 09:56:41.598605 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:56:41.598615 | orchestrator |
2025-09-20 09:56:41.598626 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-09-20 09:56:41.598637 | orchestrator | Saturday 20 September 2025 09:55:20 +0000 (0:00:00.123) 0:00:42.629 ****
2025-09-20 09:56:41.598647 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:56:41.598658 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:56:41.598668 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:56:41.598679 | orchestrator |
2025-09-20 09:56:41.598689 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-20 09:56:41.598700 | orchestrator | Saturday 20 September 2025 09:55:20 +0000 (0:00:00.609) 0:00:43.239 ****
2025-09-20 09:56:41.598711 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:56:41.598731 | orchestrator |
2025-09-20 09:56:41.598742 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-09-20 09:56:41.598753 | orchestrator | Saturday 20 September 2025 09:55:21 +0000 (0:00:01.055) 0:00:44.294 ****
2025-09-20 09:56:41.598764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.598785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.598833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.598848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.598860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.598878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.598890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.598911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 
09:56:41.598923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.598935 | orchestrator | 2025-09-20 09:56:41.598946 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-20 09:56:41.598957 | orchestrator | Saturday 20 September 2025 09:55:25 +0000 (0:00:03.689) 0:00:47.984 **** 2025-09-20 09:56:41.598973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:56:41.599005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599029 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:56:41.599048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:56:41.599060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599087 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:56:41.599099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:56:41.599117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599139 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:56:41.599150 | orchestrator | 
2025-09-20 09:56:41.599161 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-20 09:56:41.599172 | orchestrator | Saturday 20 September 2025 09:55:27 +0000 (0:00:01.736) 0:00:49.721 **** 2025-09-20 09:56:41.599191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:56:41.599203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599360 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:56:41.599388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:56:41.599408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599458 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:56:41.599491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:56:41.599520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.599603 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:56:41.599618 | orchestrator | 2025-09-20 09:56:41.599634 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-20 09:56:41.599652 | orchestrator | Saturday 20 September 2025 09:55:28 +0000 (0:00:01.589) 0:00:51.311 **** 2025-09-20 09:56:41.599668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.599697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.599713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.599730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.599748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.599758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.599768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.599784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.599795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.599805 | orchestrator | 2025-09-20 09:56:41.599814 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-20 09:56:41.599824 | orchestrator | Saturday 20 September 2025 09:55:32 +0000 (0:00:04.159) 0:00:55.470 **** 2025-09-20 09:56:41.599841 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:56:41.599850 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:56:41.599860 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:56:41.599869 | orchestrator | 2025-09-20 09:56:41.599879 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-20 09:56:41.599888 | orchestrator | Saturday 20 September 2025 09:55:36 +0000 (0:00:03.744) 0:00:59.214 **** 2025-09-20 09:56:41.599898 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 09:56:41.599908 | orchestrator | 2025-09-20 09:56:41.599917 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-20 09:56:41.599931 | orchestrator | Saturday 20 September 2025 09:55:37 +0000 (0:00:00.827) 0:01:00.042 **** 2025-09-20 09:56:41.599940 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:56:41.599950 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:56:41.599959 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:56:41.599969 | orchestrator | 2025-09-20 09:56:41.599978 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-20 09:56:41.599988 | orchestrator | Saturday 20 
September 2025 09:55:38 +0000 (0:00:00.602) 0:01:00.644 **** 2025-09-20 09:56:41.599998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.600008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 
09:56:41.600025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.600042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600107 | orchestrator | 2025-09-20 09:56:41.600117 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-20 09:56:41.600127 | orchestrator | Saturday 20 September 2025 09:55:49 +0000 (0:00:11.509) 0:01:12.154 **** 2025-09-20 09:56:41.600150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:56:41.600167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.600185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.600201 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:56:41.600218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:56:41.600265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.600295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.600323 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:56:41.600347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-20 09:56:41.600365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.600382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:56:41.600398 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:56:41.600413 | orchestrator | 2025-09-20 09:56:41.600428 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-20 09:56:41.600444 | orchestrator | Saturday 20 September 2025 09:55:51 +0000 (0:00:01.576) 0:01:13.730 **** 2025-09-20 09:56:41.600461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.600498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.600527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-20 09:56:41.600545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:56:41.600698 | orchestrator | 2025-09-20 09:56:41.600714 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-20 09:56:41.600730 | orchestrator | Saturday 20 September 2025 09:55:54 +0000 (0:00:03.254) 0:01:16.985 **** 2025-09-20 09:56:41.600746 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:56:41.600761 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:56:41.600785 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:56:41.600801 | orchestrator | 2025-09-20 09:56:41.600817 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-20 
09:56:41.600833 | orchestrator | Saturday 20 September 2025 09:55:54 +0000 (0:00:00.211) 0:01:17.196 **** 2025-09-20 09:56:41.600848 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:56:41.600864 | orchestrator | 2025-09-20 09:56:41.600880 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-20 09:56:41.600896 | orchestrator | Saturday 20 September 2025 09:55:56 +0000 (0:00:02.035) 0:01:19.231 **** 2025-09-20 09:56:41.600912 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:56:41.600928 | orchestrator | 2025-09-20 09:56:41.600945 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-20 09:56:41.600960 | orchestrator | Saturday 20 September 2025 09:55:58 +0000 (0:00:02.054) 0:01:21.286 **** 2025-09-20 09:56:41.600976 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:56:41.600992 | orchestrator | 2025-09-20 09:56:41.601007 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-20 09:56:41.601022 | orchestrator | Saturday 20 September 2025 09:56:11 +0000 (0:00:12.602) 0:01:33.888 **** 2025-09-20 09:56:41.601038 | orchestrator | 2025-09-20 09:56:41.601053 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-20 09:56:41.601069 | orchestrator | Saturday 20 September 2025 09:56:11 +0000 (0:00:00.077) 0:01:33.966 **** 2025-09-20 09:56:41.601086 | orchestrator | 2025-09-20 09:56:41.601101 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-20 09:56:41.601117 | orchestrator | Saturday 20 September 2025 09:56:11 +0000 (0:00:00.072) 0:01:34.039 **** 2025-09-20 09:56:41.601134 | orchestrator | 2025-09-20 09:56:41.601148 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-20 09:56:41.601165 | orchestrator | Saturday 20 September 2025 09:56:11 
+0000 (0:00:00.080) 0:01:34.119 **** 2025-09-20 09:56:41.601195 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:56:41.601212 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:56:41.601227 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:56:41.601271 | orchestrator | 2025-09-20 09:56:41.601288 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-20 09:56:41.601305 | orchestrator | Saturday 20 September 2025 09:56:20 +0000 (0:00:09.215) 0:01:43.335 **** 2025-09-20 09:56:41.601322 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:56:41.601338 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:56:41.601359 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:56:41.601380 | orchestrator | 2025-09-20 09:56:41.601395 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-20 09:56:41.601409 | orchestrator | Saturday 20 September 2025 09:56:29 +0000 (0:00:08.751) 0:01:52.087 **** 2025-09-20 09:56:41.601424 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:56:41.601439 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:56:41.601454 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:56:41.601470 | orchestrator | 2025-09-20 09:56:41.601486 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:56:41.601503 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-20 09:56:41.601521 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:56:41.601536 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:56:41.601553 | orchestrator | 2025-09-20 09:56:41.601577 | orchestrator | 2025-09-20 09:56:41.601595 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-20 09:56:41.601611 | orchestrator | Saturday 20 September 2025 09:56:39 +0000 (0:00:09.629) 0:02:01.716 **** 2025-09-20 09:56:41.601628 | orchestrator | =============================================================================== 2025-09-20 09:56:41.601644 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.08s 2025-09-20 09:56:41.601685 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.60s 2025-09-20 09:56:41.601709 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.51s 2025-09-20 09:56:41.601725 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.63s 2025-09-20 09:56:41.601740 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.22s 2025-09-20 09:56:41.601756 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.75s 2025-09-20 09:56:41.601778 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.98s 2025-09-20 09:56:41.601799 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.27s 2025-09-20 09:56:41.601816 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.16s 2025-09-20 09:56:41.601833 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.92s 2025-09-20 09:56:41.601860 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.74s 2025-09-20 09:56:41.601877 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.69s 2025-09-20 09:56:41.601894 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.36s 2025-09-20 09:56:41.601910 | orchestrator | barbican : Check barbican 
containers ------------------------------------ 3.25s 2025-09-20 09:56:41.601925 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.21s 2025-09-20 09:56:41.601941 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.08s 2025-09-20 09:56:41.601957 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.05s 2025-09-20 09:56:41.601987 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.04s 2025-09-20 09:56:41.602070 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.74s 2025-09-20 09:56:41.602095 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.59s 2025-09-20 09:56:41.602112 | orchestrator | 2025-09-20 09:56:41 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:56:41.602129 | orchestrator | 2025-09-20 09:56:41 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:56:44.613912 | orchestrator | 2025-09-20 09:56:44 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:56:44.614068 | orchestrator | 2025-09-20 09:56:44 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:56:44.614506 | orchestrator | 2025-09-20 09:56:44 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:56:44.615398 | orchestrator | 2025-09-20 09:56:44 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:56:44.615473 | orchestrator | 2025-09-20 09:56:44 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:56:47.640758 | orchestrator | 2025-09-20 09:56:47 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:56:47.643113 | orchestrator | 2025-09-20 09:56:47 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 
09:56:47.643149 | orchestrator | 2025-09-20 09:56:47 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:56:47.643161 | orchestrator | 2025-09-20 09:56:47 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:56:47.643303 | orchestrator | 2025-09-20 09:56:47 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:24.181733 | orchestrator | 2025-09-20 09:57:24 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:24.183692 | orchestrator | 2025-09-20 09:57:24 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:24.186221 | orchestrator | 2025-09-20 09:57:24 | INFO  | Task
50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:24.187305 | orchestrator | 2025-09-20 09:57:24 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:24.187395 | orchestrator | 2025-09-20 09:57:24 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:27.219309 | orchestrator | 2025-09-20 09:57:27 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:27.220471 | orchestrator | 2025-09-20 09:57:27 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:27.221525 | orchestrator | 2025-09-20 09:57:27 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:27.223856 | orchestrator | 2025-09-20 09:57:27 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:27.224084 | orchestrator | 2025-09-20 09:57:27 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:30.258938 | orchestrator | 2025-09-20 09:57:30 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:30.260014 | orchestrator | 2025-09-20 09:57:30 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:30.261763 | orchestrator | 2025-09-20 09:57:30 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:30.262712 | orchestrator | 2025-09-20 09:57:30 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:30.262792 | orchestrator | 2025-09-20 09:57:30 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:33.302188 | orchestrator | 2025-09-20 09:57:33 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:33.303308 | orchestrator | 2025-09-20 09:57:33 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:33.306178 | orchestrator | 2025-09-20 09:57:33 | INFO  | Task 
50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:33.309528 | orchestrator | 2025-09-20 09:57:33 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:33.309588 | orchestrator | 2025-09-20 09:57:33 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:36.344932 | orchestrator | 2025-09-20 09:57:36 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:36.346171 | orchestrator | 2025-09-20 09:57:36 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:36.347844 | orchestrator | 2025-09-20 09:57:36 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:36.348945 | orchestrator | 2025-09-20 09:57:36 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:36.349023 | orchestrator | 2025-09-20 09:57:36 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:39.386108 | orchestrator | 2025-09-20 09:57:39 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:39.386581 | orchestrator | 2025-09-20 09:57:39 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:39.387522 | orchestrator | 2025-09-20 09:57:39 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:39.388462 | orchestrator | 2025-09-20 09:57:39 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:39.388488 | orchestrator | 2025-09-20 09:57:39 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:42.438350 | orchestrator | 2025-09-20 09:57:42 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:42.439489 | orchestrator | 2025-09-20 09:57:42 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:42.439918 | orchestrator | 2025-09-20 09:57:42 | INFO  | Task 
50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:42.440642 | orchestrator | 2025-09-20 09:57:42 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:42.440679 | orchestrator | 2025-09-20 09:57:42 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:45.468513 | orchestrator | 2025-09-20 09:57:45 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:45.468599 | orchestrator | 2025-09-20 09:57:45 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:45.469106 | orchestrator | 2025-09-20 09:57:45 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:45.469969 | orchestrator | 2025-09-20 09:57:45 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:45.470066 | orchestrator | 2025-09-20 09:57:45 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:48.509230 | orchestrator | 2025-09-20 09:57:48 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:48.509474 | orchestrator | 2025-09-20 09:57:48 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:48.510238 | orchestrator | 2025-09-20 09:57:48 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:48.510901 | orchestrator | 2025-09-20 09:57:48 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:48.510922 | orchestrator | 2025-09-20 09:57:48 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:51.538634 | orchestrator | 2025-09-20 09:57:51 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:51.538838 | orchestrator | 2025-09-20 09:57:51 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:51.540816 | orchestrator | 2025-09-20 09:57:51 | INFO  | Task 
50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:51.541765 | orchestrator | 2025-09-20 09:57:51 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:51.541794 | orchestrator | 2025-09-20 09:57:51 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:54.590557 | orchestrator | 2025-09-20 09:57:54 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:54.592764 | orchestrator | 2025-09-20 09:57:54 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:54.594786 | orchestrator | 2025-09-20 09:57:54 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:54.596837 | orchestrator | 2025-09-20 09:57:54 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:54.597114 | orchestrator | 2025-09-20 09:57:54 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:57:57.644023 | orchestrator | 2025-09-20 09:57:57 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:57:57.647374 | orchestrator | 2025-09-20 09:57:57 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:57:57.649800 | orchestrator | 2025-09-20 09:57:57 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:57:57.651129 | orchestrator | 2025-09-20 09:57:57 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state STARTED 2025-09-20 09:57:57.651155 | orchestrator | 2025-09-20 09:57:57 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:00.697271 | orchestrator | 2025-09-20 09:58:00 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:00.697400 | orchestrator | 2025-09-20 09:58:00 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:00.697897 | orchestrator | 2025-09-20 09:58:00 | INFO  | Task 
50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:58:00.698625 | orchestrator | 2025-09-20 09:58:00 | INFO  | Task 1d21e37a-54ab-4569-9867-60fe253cc7ac is in state SUCCESS 2025-09-20 09:58:00.700150 | orchestrator | 2025-09-20 09:58:00 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:00.700590 | orchestrator | 2025-09-20 09:58:00 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:03.741984 | orchestrator | 2025-09-20 09:58:03 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:03.742388 | orchestrator | 2025-09-20 09:58:03 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:03.743160 | orchestrator | 2025-09-20 09:58:03 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:58:03.744066 | orchestrator | 2025-09-20 09:58:03 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:03.744085 | orchestrator | 2025-09-20 09:58:03 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:06.766141 | orchestrator | 2025-09-20 09:58:06 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:06.767848 | orchestrator | 2025-09-20 09:58:06 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:06.767927 | orchestrator | 2025-09-20 09:58:06 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:58:06.767940 | orchestrator | 2025-09-20 09:58:06 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:06.767951 | orchestrator | 2025-09-20 09:58:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:09.804640 | orchestrator | 2025-09-20 09:58:09 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:09.804791 | orchestrator | 2025-09-20 09:58:09 | INFO  | Task 
5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:09.806101 | orchestrator | 2025-09-20 09:58:09 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:58:09.806691 | orchestrator | 2025-09-20 09:58:09 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:09.806759 | orchestrator | 2025-09-20 09:58:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:12.838891 | orchestrator | 2025-09-20 09:58:12 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:12.840300 | orchestrator | 2025-09-20 09:58:12 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:12.842515 | orchestrator | 2025-09-20 09:58:12 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state STARTED 2025-09-20 09:58:12.843646 | orchestrator | 2025-09-20 09:58:12 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:12.843677 | orchestrator | 2025-09-20 09:58:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:15.896867 | orchestrator | 2025-09-20 09:58:15 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:15.899075 | orchestrator | 2025-09-20 09:58:15 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:15.905845 | orchestrator | 2025-09-20 09:58:15 | INFO  | Task 50039956-cbf8-4865-af37-ee3314718188 is in state SUCCESS 2025-09-20 09:58:15.907968 | orchestrator | 2025-09-20 09:58:15.908021 | orchestrator | 2025-09-20 09:58:15.908036 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-20 09:58:15.908048 | orchestrator | 2025-09-20 09:58:15.908059 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-20 09:58:15.908070 | orchestrator | Saturday 20 September 2025 09:56:46 +0000 (0:00:00.142) 0:00:00.142 
**** 2025-09-20 09:58:15.908081 | orchestrator | changed: [localhost] 2025-09-20 09:58:15.908093 | orchestrator | 2025-09-20 09:58:15.908105 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-20 09:58:15.908116 | orchestrator | Saturday 20 September 2025 09:56:47 +0000 (0:00:01.119) 0:00:01.262 **** 2025-09-20 09:58:15.908127 | orchestrator | changed: [localhost] 2025-09-20 09:58:15.908138 | orchestrator | 2025-09-20 09:58:15.908150 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-20 09:58:15.908160 | orchestrator | Saturday 20 September 2025 09:57:32 +0000 (0:00:44.950) 0:00:46.213 **** 2025-09-20 09:58:15.908171 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2025-09-20 09:58:15.908182 | orchestrator | changed: [localhost] 2025-09-20 09:58:15.908193 | orchestrator | 2025-09-20 09:58:15.908203 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:58:15.908214 | orchestrator | 2025-09-20 09:58:15.908225 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 09:58:15.908363 | orchestrator | Saturday 20 September 2025 09:57:57 +0000 (0:00:25.537) 0:01:11.751 **** 2025-09-20 09:58:15.908376 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:58:15.908389 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:58:15.908409 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:58:15.908472 | orchestrator | 2025-09-20 09:58:15.908595 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:58:15.908616 | orchestrator | Saturday 20 September 2025 09:57:58 +0000 (0:00:00.347) 0:01:12.099 **** 2025-09-20 09:58:15.908628 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-20 09:58:15.908638 | orchestrator | 
ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-20 09:58:15.908649 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-20 09:58:15.908660 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-20 09:58:15.908671 | orchestrator | 2025-09-20 09:58:15.908681 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-20 09:58:15.908692 | orchestrator | skipping: no hosts matched 2025-09-20 09:58:15.908703 | orchestrator | 2025-09-20 09:58:15.908714 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:58:15.908725 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:58:15.908738 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:58:15.908751 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:58:15.908779 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 09:58:15.908799 | orchestrator | 2025-09-20 09:58:15.908817 | orchestrator | 2025-09-20 09:58:15.908834 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:58:15.908852 | orchestrator | Saturday 20 September 2025 09:57:58 +0000 (0:00:00.480) 0:01:12.579 **** 2025-09-20 09:58:15.908871 | orchestrator | =============================================================================== 2025-09-20 09:58:15.908889 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 44.95s 2025-09-20 09:58:15.908917 | orchestrator | Download ironic-agent kernel ------------------------------------------- 25.54s 2025-09-20 09:58:15.908938 | orchestrator | Ensure the destination directory exists --------------------------------- 1.12s 
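[Editor's note: the repeated "is in state STARTED ... Wait 1 second(s) until the next check" output earlier in this log is a simple wait-for-tasks polling loop. A minimal sketch in Python under stated assumptions — `get_task_state`, `wait_for_tasks` and the toy backend below are hypothetical stand-ins, not names taken from the OSISM tooling:]

```python
import time

def wait_for_tasks(get_task_state, task_ids, interval=1.0):
    """Poll until every task reports SUCCESS, logging each check.

    get_task_state: callable mapping a task id to a state string
    (hypothetical stand-in for the real task-backend query).
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

# Toy backend: each task reports SUCCESS after its third poll.
calls = {}
def fake_state(task_id, threshold=3):
    calls[task_id] = calls.get(task_id, 0) + 1
    return "SUCCESS" if calls[task_id] >= threshold else "STARTED"

wait_for_tasks(fake_state, ["a", "b"], interval=0)
```

Tasks leave the pending set independently as they finish, which matches the log above, where one task reaches SUCCESS while the others are still polled as STARTED.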
2025-09-20 09:58:15.908956 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s
2025-09-20 09:58:15.908994 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2025-09-20 09:58:15.909030 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 09:58:15.909065 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 09:58:15.909083 | orchestrator | Saturday 20 September 2025 09:55:06 +0000 (0:00:00.635) 0:00:00.635 ****
2025-09-20 09:58:15.909102 | orchestrator | ok: [testbed-node-0]
2025-09-20 09:58:15.909127 | orchestrator | ok: [testbed-node-1]
2025-09-20 09:58:15.909151 | orchestrator | ok: [testbed-node-2]
2025-09-20 09:58:15.909188 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 09:58:15.909215 | orchestrator | Saturday 20 September 2025 09:55:06 +0000 (0:00:00.498) 0:00:01.133 ****
2025-09-20 09:58:15.909236 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-09-20 09:58:15.909282 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-09-20 09:58:15.909301 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-09-20 09:58:15.909338 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-09-20 09:58:15.909376 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-20 09:58:15.909390 | orchestrator | Saturday 20 September 2025 09:55:07 +0000 (0:00:00.877) 0:00:02.011 ****
2025-09-20 09:58:15.909416 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 09:58:15.909438 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-09-20 09:58:15.909466 | orchestrator | Saturday 20 September 2025 09:55:08 +0000 (0:00:00.618) 0:00:02.629 ****
2025-09-20 09:58:15.909478 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-09-20 09:58:15.909499 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-09-20 09:58:15.909510 | orchestrator | Saturday 20 September 2025 09:55:11 +0000 (0:00:03.523) 0:00:06.152 ****
2025-09-20 09:58:15.909520 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-09-20 09:58:15.909531 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-09-20 09:58:15.909552 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-09-20 09:58:15.909563 | orchestrator | Saturday 20 September 2025 09:55:18 +0000 (0:00:06.832) 0:00:12.985 ****
2025-09-20 09:58:15.909574 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-20 09:58:15.909595 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-09-20 09:58:15.909605 | orchestrator | Saturday 20 September 2025 09:55:21 +0000 (0:00:03.363) 0:00:16.349 ****
2025-09-20 09:58:15.909616 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-20 09:58:15.909627 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-09-20 09:58:15.909648 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-09-20 09:58:15.909659 | orchestrator | Saturday 20 September 2025 09:55:25 +0000 (0:00:03.865) 0:00:20.214 ****
2025-09-20 09:58:15.909669 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-20 09:58:15.909691 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-09-20 09:58:15.909702 | orchestrator | Saturday 20 September 2025 09:55:29 +0000 (0:00:03.404) 0:00:23.619 ****
2025-09-20 09:58:15.909712 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-09-20 09:58:15.909733 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-09-20 09:58:15.909744 | orchestrator | Saturday 20 September 2025 09:55:34 +0000 (0:00:05.247) 0:00:28.866 ****
2025-09-20 09:58:15.909771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
[the same designate-api item changed on testbed-node-1 and testbed-node-2; the only per-node difference is the healthcheck URL, http://192.168.16.11:9001 and http://192.168.16.12:9001 respectively]
[items designate-backend-bind9, designate-central, designate-mdns, designate-producer and designate-worker likewise changed on all three nodes, each using image registry.osism.tech/kolla/designate-<service>:2024.2; the bind9 item additionally mounts the volume designate_backend_bind9:/var/lib/named/ and health-checks with 'healthcheck_listen named 53', while the others health-check with 'healthcheck_port designate-<service> 5672']
2025-09-20 09:58:15.910330 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-09-20 09:58:15.910359 | orchestrator | Saturday 20 September 2025 09:55:38 +0000 (0:00:03.750) 0:00:32.617 ****
2025-09-20 09:58:15.910378 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:58:15.910541 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-09-20 09:58:15.910561 | orchestrator | Saturday 20 September 2025 09:55:38 +0000 (0:00:00.267) 0:00:32.886 ****
2025-09-20 09:58:15.910580 | orchestrator | skipping: 
[testbed-node-0] 2025-09-20 09:58:15.910596 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:15.910607 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:15.910618 | orchestrator | 2025-09-20 09:58:15.910628 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-20 09:58:15.910639 | orchestrator | Saturday 20 September 2025 09:55:39 +0000 (0:00:00.897) 0:00:33.784 **** 2025-09-20 09:58:15.910650 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:58:15.910661 | orchestrator | 2025-09-20 09:58:15.910671 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-20 09:58:15.910682 | orchestrator | Saturday 20 September 2025 09:55:41 +0000 (0:00:02.396) 0:00:36.180 **** 2025-09-20 09:58:15.910702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 09:58:15.910727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 09:58:15.910749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 09:58:15.910760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.910996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.911011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-09-20 09:58:15.911022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.911045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.911055 | orchestrator | 2025-09-20 09:58:15.911065 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-20 09:58:15.911075 | orchestrator | Saturday 20 September 2025 09:55:49 +0000 (0:00:07.611) 0:00:43.791 **** 2025-09-20 09:58:15.911165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.911176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.911186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}})  2025-09-20 09:58:15.911220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:58:15.911231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911287 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911351 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:15.911361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911377 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:15.911387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.911401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:58:15.911412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.911472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 
09:58:15.911488 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:58:15.911498 | orchestrator |
2025-09-20 09:58:15.911508 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-09-20 09:58:15.911518 | orchestrator | Saturday 20 September 2025 09:55:51 +0000 (0:00:01.839) 0:00:45.631 ****
2025-09-20 09:58:15.911528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.911542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-20 09:58:15.911553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911605 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:58:15.911615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.911629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-20 09:58:15.911639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911691 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:58:15.911701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.911716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-20 09:58:15.911726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911779 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:58:15.911789 | orchestrator |
2025-09-20 09:58:15.911799 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-09-20 09:58:15.911808 | orchestrator | Saturday 20 September 2025 09:55:52 +0000 (0:00:01.514) 0:00:47.146 ****
2025-09-20 09:58:15.911818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.911836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.911846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.911857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-20 09:58:15.911893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-20 09:58:15.911904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-20 09:58:15.911914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.911991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.912002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.912016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.912026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.912036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913046 | orchestrator |
2025-09-20 09:58:15.913057 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-09-20 09:58:15.913067 | orchestrator | Saturday 20 September 2025 09:55:59 +0000 (0:00:06.700) 0:00:53.846 ****
2025-09-20 09:58:15.913077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.913089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.913105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.913115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-20 09:58:15.913143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-20 09:58:15.913154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-20 09:58:15.913164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-20 09:58:15.913346 | orchestrator |
2025-09-20 09:58:15.913355 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-09-20 09:58:15.913365 | orchestrator | Saturday 20 September 2025 09:56:18 +0000 (0:00:18.899) 0:01:12.745 ****
2025-09-20 09:58:15.913374 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-20 09:58:15.913384 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-20 09:58:15.913399 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-20 09:58:15.913409 | orchestrator |
2025-09-20 09:58:15.913418 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-09-20 09:58:15.913427 | orchestrator | Saturday 20 September 2025 09:56:24 +0000 (0:00:06.451) 0:01:19.196 ****
2025-09-20 09:58:15.913437 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-20 09:58:15.913446 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-20 09:58:15.913456 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-20 09:58:15.913465 | orchestrator |
2025-09-20 09:58:15.913475 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-09-20 09:58:15.913484 | orchestrator | Saturday 20 September 2025 09:56:28 +0000 (0:00:03.644) 0:01:22.841 ****
2025-09-20 09:58:15.913494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.913508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-20 09:58:15.913519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.913535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.913552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.913608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.913670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.913729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.913745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.913757 | orchestrator | 2025-09-20 09:58:15.913768 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-20 09:58:15.913779 | orchestrator | Saturday 20 September 2025 09:56:31 +0000 (0:00:03.565) 0:01:26.406 **** 2025-09-20 09:58:15.913791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.913802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.913824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.913836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.913848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.913908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.913959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.913998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914087 | orchestrator | 2025-09-20 09:58:15.914096 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-20 09:58:15.914106 | orchestrator | Saturday 20 September 2025 09:56:35 +0000 (0:00:03.100) 0:01:29.506 **** 2025-09-20 09:58:15.914115 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:15.914125 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:15.914135 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:15.914144 | orchestrator | 2025-09-20 09:58:15.914153 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-20 09:58:15.914163 | orchestrator | Saturday 20 September 2025 09:56:35 +0000 (0:00:00.212) 0:01:29.719 **** 2025-09-20 09:58:15.914181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.914192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:58:15.914209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914310 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:15.914327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.914338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:58:15.914355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-09-20 09:58:15.914370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914400 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:15.914416 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-20 09:58:15.914427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-20 09:58:15.914443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-20 09:58:15.914487 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:15.914497 | orchestrator | 2025-09-20 09:58:15.914506 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-20 09:58:15.914516 | orchestrator | Saturday 20 September 2025 09:56:36 +0000 (0:00:01.669) 0:01:31.388 **** 2025-09-20 09:58:15.914532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 09:58:15.914549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 09:58:15.914560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-20 09:58:15.914574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-20 09:58:15.914765 | orchestrator | 2025-09-20 09:58:15.914775 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-20 09:58:15.914785 | orchestrator | Saturday 20 September 2025 09:56:42 +0000 (0:00:05.301) 0:01:36.690 **** 2025-09-20 09:58:15.914794 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:15.914802 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:15.914810 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:15.914818 | orchestrator | 2025-09-20 09:58:15.914825 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-20 09:58:15.914833 | orchestrator | Saturday 20 September 2025 09:56:42 +0000 (0:00:00.548) 0:01:37.238 **** 2025-09-20 09:58:15.914841 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-20 09:58:15.914848 | orchestrator | 2025-09-20 09:58:15.914856 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-20 09:58:15.914869 | orchestrator | Saturday 20 September 2025 09:56:45 +0000 (0:00:02.314) 0:01:39.552 **** 2025-09-20 09:58:15.914877 | orchestrator | changed: [testbed-node-0] => (item=None) 
2025-09-20 09:58:15.914885 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-20 09:58:15.914892 | orchestrator | 2025-09-20 09:58:15.914900 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-20 09:58:15.914908 | orchestrator | Saturday 20 September 2025 09:56:47 +0000 (0:00:02.725) 0:01:42.278 **** 2025-09-20 09:58:15.914916 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:15.914923 | orchestrator | 2025-09-20 09:58:15.914935 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-20 09:58:15.914944 | orchestrator | Saturday 20 September 2025 09:57:04 +0000 (0:00:16.516) 0:01:58.794 **** 2025-09-20 09:58:15.914951 | orchestrator | 2025-09-20 09:58:15.914959 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-20 09:58:15.914967 | orchestrator | Saturday 20 September 2025 09:57:04 +0000 (0:00:00.189) 0:01:58.983 **** 2025-09-20 09:58:15.914975 | orchestrator | 2025-09-20 09:58:15.914982 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-20 09:58:15.914990 | orchestrator | Saturday 20 September 2025 09:57:04 +0000 (0:00:00.061) 0:01:59.045 **** 2025-09-20 09:58:15.914998 | orchestrator | 2025-09-20 09:58:15.915005 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-20 09:58:15.915013 | orchestrator | Saturday 20 September 2025 09:57:04 +0000 (0:00:00.065) 0:01:59.110 **** 2025-09-20 09:58:15.915021 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:15.915029 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:58:15.915036 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:58:15.915044 | orchestrator | 2025-09-20 09:58:15.915052 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-20 
09:58:15.915060 | orchestrator | Saturday 20 September 2025 09:57:18 +0000 (0:00:13.579) 0:02:12.690 **** 2025-09-20 09:58:15.915067 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:15.915075 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:58:15.915083 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:58:15.915091 | orchestrator | 2025-09-20 09:58:15.915098 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-20 09:58:15.915106 | orchestrator | Saturday 20 September 2025 09:57:27 +0000 (0:00:08.881) 0:02:21.572 **** 2025-09-20 09:58:15.915114 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:15.915122 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:58:15.915129 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:58:15.915137 | orchestrator | 2025-09-20 09:58:15.915145 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-20 09:58:15.915152 | orchestrator | Saturday 20 September 2025 09:57:35 +0000 (0:00:08.813) 0:02:30.385 **** 2025-09-20 09:58:15.915160 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:58:15.915168 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:15.915176 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:58:15.915183 | orchestrator | 2025-09-20 09:58:15.915191 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-20 09:58:15.915199 | orchestrator | Saturday 20 September 2025 09:57:48 +0000 (0:00:12.318) 0:02:42.703 **** 2025-09-20 09:58:15.915207 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:15.915215 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:58:15.915222 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:58:15.915230 | orchestrator | 2025-09-20 09:58:15.915238 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-20 09:58:15.915259 
| orchestrator | Saturday 20 September 2025 09:57:59 +0000 (0:00:10.859) 0:02:53.563 **** 2025-09-20 09:58:15.915267 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:15.915275 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:58:15.915283 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:58:15.915291 | orchestrator | 2025-09-20 09:58:15.915303 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-20 09:58:15.915315 | orchestrator | Saturday 20 September 2025 09:58:06 +0000 (0:00:07.769) 0:03:01.332 **** 2025-09-20 09:58:15.915323 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:15.915330 | orchestrator | 2025-09-20 09:58:15.915338 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:58:15.915347 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-20 09:58:15.915355 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:58:15.915363 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 09:58:15.915371 | orchestrator | 2025-09-20 09:58:15.915379 | orchestrator | 2025-09-20 09:58:15.915386 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:58:15.915394 | orchestrator | Saturday 20 September 2025 09:58:13 +0000 (0:00:06.886) 0:03:08.219 **** 2025-09-20 09:58:15.915402 | orchestrator | =============================================================================== 2025-09-20 09:58:15.915410 | orchestrator | designate : Copying over designate.conf -------------------------------- 18.90s 2025-09-20 09:58:15.915417 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.52s 2025-09-20 09:58:15.915425 | orchestrator | designate : Restart 
designate-backend-bind9 container ------------------ 13.58s 2025-09-20 09:58:15.915433 | orchestrator | designate : Restart designate-producer container ----------------------- 12.32s 2025-09-20 09:58:15.915441 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.86s 2025-09-20 09:58:15.915448 | orchestrator | designate : Restart designate-api container ----------------------------- 8.88s 2025-09-20 09:58:15.915456 | orchestrator | designate : Restart designate-central container ------------------------- 8.81s 2025-09-20 09:58:15.915464 | orchestrator | designate : Restart designate-worker container -------------------------- 7.77s 2025-09-20 09:58:15.915472 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.61s 2025-09-20 09:58:15.915479 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.89s 2025-09-20 09:58:15.915487 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.83s 2025-09-20 09:58:15.915495 | orchestrator | designate : Copying over config.json files for services ----------------- 6.70s 2025-09-20 09:58:15.915509 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.45s 2025-09-20 09:58:15.915522 | orchestrator | designate : Check designate containers ---------------------------------- 5.30s 2025-09-20 09:58:15.915536 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 5.25s 2025-09-20 09:58:15.915549 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.87s 2025-09-20 09:58:15.915561 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.75s 2025-09-20 09:58:15.915574 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.64s 2025-09-20 09:58:15.915587 | orchestrator | designate : Copying over rndc.conf 
-------------------------------------- 3.57s 2025-09-20 09:58:15.915599 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.52s 2025-09-20 09:58:15.915619 | orchestrator | 2025-09-20 09:58:15 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:15.915633 | orchestrator | 2025-09-20 09:58:15 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED 2025-09-20 09:58:15.915646 | orchestrator | 2025-09-20 09:58:15 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:18.953561 | orchestrator | 2025-09-20 09:58:18 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:18.954954 | orchestrator | 2025-09-20 09:58:18 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:18.957740 | orchestrator | 2025-09-20 09:58:18 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:18.959000 | orchestrator | 2025-09-20 09:58:18 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED 2025-09-20 09:58:18.959180 | orchestrator | 2025-09-20 09:58:18 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:22.000863 | orchestrator | 2025-09-20 09:58:21 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:22.002238 | orchestrator | 2025-09-20 09:58:22 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:22.004201 | orchestrator | 2025-09-20 09:58:22 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:22.005873 | orchestrator | 2025-09-20 09:58:22 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED 2025-09-20 09:58:22.006090 | orchestrator | 2025-09-20 09:58:22 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:25.040429 | orchestrator | 2025-09-20 09:58:25 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state 
STARTED 2025-09-20 09:58:25.041852 | orchestrator | 2025-09-20 09:58:25 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:25.044284 | orchestrator | 2025-09-20 09:58:25 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:25.046521 | orchestrator | 2025-09-20 09:58:25 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED 2025-09-20 09:58:25.046542 | orchestrator | 2025-09-20 09:58:25 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:28.098565 | orchestrator | 2025-09-20 09:58:28 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:28.101049 | orchestrator | 2025-09-20 09:58:28 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state STARTED 2025-09-20 09:58:28.103706 | orchestrator | 2025-09-20 09:58:28 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:28.105953 | orchestrator | 2025-09-20 09:58:28 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED 2025-09-20 09:58:28.105986 | orchestrator | 2025-09-20 09:58:28 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:31.169660 | orchestrator | 2025-09-20 09:58:31 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:31.173566 | orchestrator | 2025-09-20 09:58:31 | INFO  | Task 5010fb1c-65ed-4903-90e6-48af0b80eca1 is in state SUCCESS 2025-09-20 09:58:31.175062 | orchestrator | 2025-09-20 09:58:31.175094 | orchestrator | 2025-09-20 09:58:31.175106 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:58:31.175118 | orchestrator | 2025-09-20 09:58:31.175130 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 09:58:31.175141 | orchestrator | Saturday 20 September 2025 09:54:23 +0000 (0:00:00.279) 0:00:00.279 **** 2025-09-20 09:58:31.175152 | orchestrator | 
ok: [testbed-node-0] 2025-09-20 09:58:31.175164 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:58:31.175175 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:58:31.175186 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:58:31.175197 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:58:31.175207 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:58:31.175218 | orchestrator | 2025-09-20 09:58:31.175229 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:58:31.175240 | orchestrator | Saturday 20 September 2025 09:54:23 +0000 (0:00:00.585) 0:00:00.865 **** 2025-09-20 09:58:31.175300 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-20 09:58:31.175312 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-20 09:58:31.175324 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-20 09:58:31.175334 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-20 09:58:31.175345 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-20 09:58:31.175356 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-20 09:58:31.175367 | orchestrator | 2025-09-20 09:58:31.175378 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-20 09:58:31.175388 | orchestrator | 2025-09-20 09:58:31.175399 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-20 09:58:31.175410 | orchestrator | Saturday 20 September 2025 09:54:24 +0000 (0:00:00.551) 0:00:01.417 **** 2025-09-20 09:58:31.175422 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 09:58:31.175434 | orchestrator | 2025-09-20 09:58:31.175445 | orchestrator | TASK [neutron : Get container facts] 
******************************************* 2025-09-20 09:58:31.175456 | orchestrator | Saturday 20 September 2025 09:54:25 +0000 (0:00:01.043) 0:00:02.461 **** 2025-09-20 09:58:31.175466 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:58:31.175477 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:58:31.175488 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:58:31.175499 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:58:31.175509 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:58:31.175520 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:58:31.175531 | orchestrator | 2025-09-20 09:58:31.175542 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-20 09:58:31.175553 | orchestrator | Saturday 20 September 2025 09:54:26 +0000 (0:00:01.127) 0:00:03.588 **** 2025-09-20 09:58:31.175564 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:58:31.175584 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:58:31.175596 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:58:31.175607 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:58:31.175617 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:58:31.175628 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:58:31.175639 | orchestrator | 2025-09-20 09:58:31.175651 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-20 09:58:31.175663 | orchestrator | Saturday 20 September 2025 09:54:27 +0000 (0:00:00.980) 0:00:04.569 **** 2025-09-20 09:58:31.175675 | orchestrator | ok: [testbed-node-0] => { 2025-09-20 09:58:31.175689 | orchestrator |  "changed": false, 2025-09-20 09:58:31.175701 | orchestrator |  "msg": "All assertions passed" 2025-09-20 09:58:31.175714 | orchestrator | } 2025-09-20 09:58:31.175726 | orchestrator | ok: [testbed-node-1] => { 2025-09-20 09:58:31.175738 | orchestrator |  "changed": false, 2025-09-20 09:58:31.175751 | orchestrator |  "msg": "All assertions passed" 2025-09-20 
09:58:31.175763 | orchestrator | } 2025-09-20 09:58:31.175775 | orchestrator | ok: [testbed-node-2] => { 2025-09-20 09:58:31.175787 | orchestrator |  "changed": false, 2025-09-20 09:58:31.175799 | orchestrator |  "msg": "All assertions passed" 2025-09-20 09:58:31.175811 | orchestrator | } 2025-09-20 09:58:31.175824 | orchestrator | ok: [testbed-node-3] => { 2025-09-20 09:58:31.175835 | orchestrator |  "changed": false, 2025-09-20 09:58:31.175848 | orchestrator |  "msg": "All assertions passed" 2025-09-20 09:58:31.175860 | orchestrator | } 2025-09-20 09:58:31.175872 | orchestrator | ok: [testbed-node-4] => { 2025-09-20 09:58:31.175884 | orchestrator |  "changed": false, 2025-09-20 09:58:31.175911 | orchestrator |  "msg": "All assertions passed" 2025-09-20 09:58:31.175924 | orchestrator | } 2025-09-20 09:58:31.175936 | orchestrator | ok: [testbed-node-5] => { 2025-09-20 09:58:31.175948 | orchestrator |  "changed": false, 2025-09-20 09:58:31.175960 | orchestrator |  "msg": "All assertions passed" 2025-09-20 09:58:31.175972 | orchestrator | } 2025-09-20 09:58:31.175991 | orchestrator | 2025-09-20 09:58:31.176003 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-20 09:58:31.176014 | orchestrator | Saturday 20 September 2025 09:54:28 +0000 (0:00:00.758) 0:00:05.327 **** 2025-09-20 09:58:31.176025 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.176036 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.176046 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.176057 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.176068 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.176078 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.176089 | orchestrator | 2025-09-20 09:58:31.176100 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-20 09:58:31.176111 | orchestrator | Saturday 20 
September 2025 09:54:28 +0000 (0:00:00.584) 0:00:05.912 **** 2025-09-20 09:58:31.176122 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-20 09:58:31.176132 | orchestrator | 2025-09-20 09:58:31.176143 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-20 09:58:31.176154 | orchestrator | Saturday 20 September 2025 09:54:32 +0000 (0:00:03.468) 0:00:09.381 **** 2025-09-20 09:58:31.176165 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-20 09:58:31.176177 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-20 09:58:31.176187 | orchestrator | 2025-09-20 09:58:31.176211 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-20 09:58:31.176222 | orchestrator | Saturday 20 September 2025 09:54:38 +0000 (0:00:06.455) 0:00:15.836 **** 2025-09-20 09:58:31.176233 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 09:58:31.176244 | orchestrator | 2025-09-20 09:58:31.176289 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-20 09:58:31.176300 | orchestrator | Saturday 20 September 2025 09:54:42 +0000 (0:00:03.438) 0:00:19.275 **** 2025-09-20 09:58:31.176311 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 09:58:31.176322 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-20 09:58:31.176333 | orchestrator | 2025-09-20 09:58:31.176344 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-20 09:58:31.176355 | orchestrator | Saturday 20 September 2025 09:54:46 +0000 (0:00:04.297) 0:00:23.573 **** 2025-09-20 09:58:31.176365 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 09:58:31.176376 | orchestrator | 2025-09-20 
09:58:31.176387 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-20 09:58:31.176397 | orchestrator | Saturday 20 September 2025 09:54:49 +0000 (0:00:03.533) 0:00:27.107 **** 2025-09-20 09:58:31.176408 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-20 09:58:31.176419 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-20 09:58:31.176429 | orchestrator | 2025-09-20 09:58:31.176440 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-20 09:58:31.176451 | orchestrator | Saturday 20 September 2025 09:54:57 +0000 (0:00:07.709) 0:00:34.816 **** 2025-09-20 09:58:31.176461 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.176472 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.176483 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.176493 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.176504 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.176515 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.176526 | orchestrator | 2025-09-20 09:58:31.176537 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-20 09:58:31.176547 | orchestrator | Saturday 20 September 2025 09:54:58 +0000 (0:00:00.891) 0:00:35.708 **** 2025-09-20 09:58:31.176558 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.176568 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.176586 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.176597 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.176608 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.176619 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.176630 | orchestrator | 2025-09-20 09:58:31.176640 | orchestrator | TASK [neutron : Check IPv6 support] 
******************************************** 2025-09-20 09:58:31.176651 | orchestrator | Saturday 20 September 2025 09:55:01 +0000 (0:00:02.457) 0:00:38.165 **** 2025-09-20 09:58:31.176662 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:58:31.176673 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:58:31.176684 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:58:31.176694 | orchestrator | ok: [testbed-node-5] 2025-09-20 09:58:31.176705 | orchestrator | ok: [testbed-node-3] 2025-09-20 09:58:31.176716 | orchestrator | ok: [testbed-node-4] 2025-09-20 09:58:31.176727 | orchestrator | 2025-09-20 09:58:31.176737 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-20 09:58:31.176748 | orchestrator | Saturday 20 September 2025 09:55:03 +0000 (0:00:02.016) 0:00:40.182 **** 2025-09-20 09:58:31.176759 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.176769 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.176780 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.176791 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.176802 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.176812 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.176823 | orchestrator | 2025-09-20 09:58:31.176833 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-20 09:58:31.176844 | orchestrator | Saturday 20 September 2025 09:55:05 +0000 (0:00:02.322) 0:00:42.505 **** 2025-09-20 09:58:31.176883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.176922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.176936 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.176955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.176967 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.176983 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.176994 | orchestrator | 2025-09-20 09:58:31.177005 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-20 09:58:31.177017 | orchestrator | Saturday 20 September 2025 09:55:08 +0000 (0:00:03.279) 0:00:45.784 **** 2025-09-20 09:58:31.177028 | orchestrator | [WARNING]: Skipped 2025-09-20 09:58:31.177040 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-20 09:58:31.177051 | orchestrator | due to this access issue: 2025-09-20 09:58:31.177062 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-20 09:58:31.177072 | orchestrator | a directory 2025-09-20 09:58:31.177083 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 09:58:31.177094 | orchestrator | 2025-09-20 09:58:31.177105 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-20 09:58:31.177121 | orchestrator | Saturday 20 September 2025 09:55:09 +0000 (0:00:00.844) 0:00:46.629 **** 2025-09-20 09:58:31.177133 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 
09:58:31.177145 | orchestrator | 2025-09-20 09:58:31.177156 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-20 09:58:31.177173 | orchestrator | Saturday 20 September 2025 09:55:10 +0000 (0:00:01.285) 0:00:47.914 **** 2025-09-20 09:58:31.177185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.177197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.177213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.177225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.177245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.177284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.177295 | orchestrator | 2025-09-20 09:58:31.177307 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-20 09:58:31.177318 | orchestrator | Saturday 20 September 2025 09:55:13 +0000 (0:00:03.135) 0:00:51.049 **** 2025-09-20 09:58:31.177329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.177341 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.177357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.177369 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.177380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.177403 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.177416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.177427 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.177438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.177449 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.177461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.177472 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.177483 | orchestrator | 2025-09-20 09:58:31.177494 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-20 09:58:31.177504 | orchestrator | Saturday 20 September 2025 09:55:16 +0000 (0:00:02.972) 0:00:54.022 **** 2025-09-20 09:58:31.177521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.177533 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.177565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.177591 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.177603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.177614 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.177625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.177636 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.177647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.177658 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.177675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.177692 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.177703 | orchestrator | 2025-09-20 09:58:31.177714 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-20 09:58:31.177725 | orchestrator | Saturday 20 September 2025 09:55:19 +0000 (0:00:03.083) 0:00:57.105 **** 2025-09-20 09:58:31.177735 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.177746 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.177757 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.177768 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.177778 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.177789 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.177799 | orchestrator | 2025-09-20 09:58:31.177810 | orchestrator | TASK [neutron : Check if policies shall be overwritten] 
************************ 2025-09-20 09:58:31.177827 | orchestrator | Saturday 20 September 2025 09:55:22 +0000 (0:00:02.590) 0:00:59.695 **** 2025-09-20 09:58:31.177838 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.177849 | orchestrator | 2025-09-20 09:58:31.177859 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-20 09:58:31.177870 | orchestrator | Saturday 20 September 2025 09:55:22 +0000 (0:00:00.135) 0:00:59.831 **** 2025-09-20 09:58:31.177881 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.177891 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.177902 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.177912 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.177923 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.177934 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.177944 | orchestrator | 2025-09-20 09:58:31.177955 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-20 09:58:31.177966 | orchestrator | Saturday 20 September 2025 09:55:23 +0000 (0:00:00.674) 0:01:00.506 **** 2025-09-20 09:58:31.177976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 
09:58:31.177988 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.177999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.178010 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.178078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 
09:58:31.178679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.178841 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.178864 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.178878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.178891 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.178902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.178914 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.178925 | orchestrator | 2025-09-20 09:58:31.178937 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-20 09:58:31.178949 | orchestrator | Saturday 20 September 2025 09:55:26 +0000 (0:00:02.999) 0:01:03.505 **** 2025-09-20 09:58:31.178960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.178998 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.179028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.179041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.179095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.179108 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.179129 | orchestrator | 2025-09-20 09:58:31.179140 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-20 09:58:31.179152 | orchestrator | Saturday 20 September 2025 09:55:31 +0000 (0:00:04.767) 0:01:08.273 **** 2025-09-20 09:58:31.179168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.179187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.179199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.179213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}}) 2025-09-20 09:58:31.179284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.179299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.179312 | orchestrator | 2025-09-20 09:58:31.179325 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-20 09:58:31.179338 | orchestrator | Saturday 20 September 2025 09:55:38 +0000 (0:00:07.058) 0:01:15.331 **** 2025-09-20 09:58:31.179362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.179375 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.179388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.179408 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.179421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.179434 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.179452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.179466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.179479 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.179491 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.179512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.179525 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.179538 | orchestrator | 2025-09-20 09:58:31.179551 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-20 09:58:31.179564 | orchestrator | Saturday 20 September 2025 09:55:43 +0000 (0:00:05.067) 0:01:20.399 **** 2025-09-20 09:58:31.179575 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.179585 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.179596 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:31.179607 | orchestrator | 
skipping: [testbed-node-5] 2025-09-20 09:58:31.179625 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:58:31.179635 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:58:31.179646 | orchestrator | 2025-09-20 09:58:31.179657 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-20 09:58:31.179668 | orchestrator | Saturday 20 September 2025 09:55:46 +0000 (0:00:03.550) 0:01:23.950 **** 2025-09-20 09:58:31.179679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.179690 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.179701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.179712 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.179727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.179739 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.179758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.179771 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.179789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.179801 | orchestrator | 2025-09-20 09:58:31.179811 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-20 09:58:31.179822 | 
orchestrator | Saturday 20 September 2025 09:55:52 +0000 (0:00:05.383) 0:01:29.333 **** 2025-09-20 09:58:31.179833 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.179844 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.179854 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.179864 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.179875 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.179885 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.179896 | orchestrator | 2025-09-20 09:58:31.179907 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-20 09:58:31.179917 | orchestrator | Saturday 20 September 2025 09:55:54 +0000 (0:00:02.338) 0:01:31.672 **** 2025-09-20 09:58:31.179928 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.179939 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.179954 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.179965 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.179975 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.179986 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.179996 | orchestrator | 2025-09-20 09:58:31.180007 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-20 09:58:31.180018 | orchestrator | Saturday 20 September 2025 09:55:56 +0000 (0:00:01.891) 0:01:33.563 **** 2025-09-20 09:58:31.180029 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.180039 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.180050 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.180060 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.180071 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.180081 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.180092 | orchestrator | 2025-09-20 
09:58:31.180103 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-20 09:58:31.180113 | orchestrator | Saturday 20 September 2025 09:55:58 +0000 (0:00:02.093) 0:01:35.657 **** 2025-09-20 09:58:31.180124 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.180135 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.180145 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.180156 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.180166 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.180184 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.180196 | orchestrator | 2025-09-20 09:58:31.180207 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-20 09:58:31.180217 | orchestrator | Saturday 20 September 2025 09:56:01 +0000 (0:00:03.151) 0:01:38.808 **** 2025-09-20 09:58:31.180228 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.180239 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.180266 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.180278 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.180294 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.180306 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.180317 | orchestrator | 2025-09-20 09:58:31.180327 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-20 09:58:31.180338 | orchestrator | Saturday 20 September 2025 09:56:04 +0000 (0:00:02.877) 0:01:41.685 **** 2025-09-20 09:58:31.180349 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.180359 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.180370 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.180381 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.180391 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 09:58:31.180402 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.180412 | orchestrator | 2025-09-20 09:58:31.180423 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-20 09:58:31.180434 | orchestrator | Saturday 20 September 2025 09:56:06 +0000 (0:00:02.431) 0:01:44.117 **** 2025-09-20 09:58:31.180445 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 09:58:31.180456 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.180466 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 09:58:31.180477 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.180488 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 09:58:31.180498 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.180509 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 09:58:31.180520 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.180531 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 09:58:31.180541 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.180552 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-20 09:58:31.180563 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.180574 | orchestrator | 2025-09-20 09:58:31.180584 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-20 09:58:31.180595 | orchestrator | Saturday 20 September 2025 09:56:09 +0000 (0:00:02.222) 0:01:46.340 **** 2025-09-20 09:58:31.180606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.180618 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.180633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.180658 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.180677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.180689 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.180700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.180711 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.180722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.180733 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.180744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.180761 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.180772 | orchestrator | 2025-09-20 09:58:31.180783 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-20 09:58:31.180794 | orchestrator | Saturday 20 September 2025 09:56:11 +0000 (0:00:02.306) 0:01:48.647 **** 2025-09-20 09:58:31.180809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': 
True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.180821 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.180838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.180850 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.180861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.180872 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.180883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.180894 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.180917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.180928 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.180939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.180951 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.180961 | orchestrator | 2025-09-20 09:58:31.180972 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-20 09:58:31.180983 | orchestrator | Saturday 20 September 2025 09:56:15 +0000 (0:00:03.628) 0:01:52.276 **** 2025-09-20 09:58:31.180993 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.181010 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.181021 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181031 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.181042 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.181053 | orchestrator | skipping: [testbed-node-5] 2025-09-20 
09:58:31.181063 | orchestrator | 2025-09-20 09:58:31.181074 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-20 09:58:31.181085 | orchestrator | Saturday 20 September 2025 09:56:18 +0000 (0:00:03.085) 0:01:55.361 **** 2025-09-20 09:58:31.181096 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.181106 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.181117 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181128 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:58:31.181138 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:58:31.181149 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:58:31.181159 | orchestrator | 2025-09-20 09:58:31.181170 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-20 09:58:31.181181 | orchestrator | Saturday 20 September 2025 09:56:22 +0000 (0:00:04.593) 0:01:59.955 **** 2025-09-20 09:58:31.181192 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.181203 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.181213 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181224 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.181234 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.181245 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.181302 | orchestrator | 2025-09-20 09:58:31.181314 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-20 09:58:31.181331 | orchestrator | Saturday 20 September 2025 09:56:24 +0000 (0:00:02.012) 0:02:01.968 **** 2025-09-20 09:58:31.181342 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.181353 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.181364 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.181374 | orchestrator | skipping: [testbed-node-0] 2025-09-20 
09:58:31.181385 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181396 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.181407 | orchestrator | 2025-09-20 09:58:31.181418 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-20 09:58:31.181428 | orchestrator | Saturday 20 September 2025 09:56:27 +0000 (0:00:02.681) 0:02:04.649 **** 2025-09-20 09:58:31.181439 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.181449 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.181460 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.181471 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181482 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.181492 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.181503 | orchestrator | 2025-09-20 09:58:31.181514 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-20 09:58:31.181524 | orchestrator | Saturday 20 September 2025 09:56:30 +0000 (0:00:02.665) 0:02:07.315 **** 2025-09-20 09:58:31.181535 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.181546 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.181557 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181567 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.181578 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.181589 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.181599 | orchestrator | 2025-09-20 09:58:31.181610 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-20 09:58:31.181621 | orchestrator | Saturday 20 September 2025 09:56:33 +0000 (0:00:03.153) 0:02:10.469 **** 2025-09-20 09:58:31.181632 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181642 | orchestrator | skipping: [testbed-node-0] 2025-09-20 
09:58:31.181653 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.181664 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.181674 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.181685 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.181696 | orchestrator | 2025-09-20 09:58:31.181706 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-20 09:58:31.181717 | orchestrator | Saturday 20 September 2025 09:56:35 +0000 (0:00:02.127) 0:02:12.597 **** 2025-09-20 09:58:31.181728 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.181739 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.181750 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.181761 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.181776 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.181787 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181798 | orchestrator | 2025-09-20 09:58:31.181808 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-20 09:58:31.181819 | orchestrator | Saturday 20 September 2025 09:56:37 +0000 (0:00:02.506) 0:02:15.103 **** 2025-09-20 09:58:31.181830 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.181841 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.181851 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181862 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.181872 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.181883 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.181894 | orchestrator | 2025-09-20 09:58:31.181904 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-20 09:58:31.181915 | orchestrator | Saturday 20 September 2025 09:56:40 +0000 (0:00:02.949) 0:02:18.053 **** 2025-09-20 
09:58:31.181926 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 09:58:31.181943 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.181954 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 09:58:31.181965 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.181976 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 09:58:31.181987 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.181997 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 09:58:31.182008 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.182120 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 09:58:31.182135 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.182146 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-20 09:58:31.182157 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.182168 | orchestrator | 2025-09-20 09:58:31.182179 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-20 09:58:31.182190 | orchestrator | Saturday 20 September 2025 09:56:43 +0000 (0:00:02.824) 0:02:20.877 **** 2025-09-20 09:58:31.182202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.182213 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.182225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.182236 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.182310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.182334 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.182345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.182356 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.182377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-20 09:58:31.182389 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.182400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-20 09:58:31.182410 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.182419 | orchestrator | 2025-09-20 09:58:31.182429 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-20 09:58:31.182439 | orchestrator | Saturday 20 September 2025 09:56:46 +0000 (0:00:02.653) 0:02:23.531 **** 2025-09-20 09:58:31.182449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.182464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.182486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-20 09:58:31.182497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.182508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.182518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-20 09:58:31.182528 | orchestrator | 2025-09-20 09:58:31.182539 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-20 09:58:31.182554 | orchestrator | Saturday 20 September 2025 09:56:50 +0000 (0:00:04.374) 0:02:27.905 **** 2025-09-20 09:58:31.182564 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:58:31.182574 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:58:31.182583 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:58:31.182593 | orchestrator | skipping: [testbed-node-3] 2025-09-20 09:58:31.182603 | orchestrator | skipping: [testbed-node-4] 2025-09-20 09:58:31.182612 | orchestrator | skipping: [testbed-node-5] 2025-09-20 09:58:31.182622 | orchestrator | 2025-09-20 09:58:31.182635 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-20 09:58:31.182645 | orchestrator | Saturday 20 September 2025 09:56:51 +0000 (0:00:00.861) 0:02:28.766 **** 2025-09-20 09:58:31.182655 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:31.182665 | orchestrator | 2025-09-20 09:58:31.182674 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-20 09:58:31.182684 | orchestrator | Saturday 20 September 2025 09:56:53 +0000 (0:00:01.946) 0:02:30.713 **** 2025-09-20 09:58:31.182694 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:31.182703 | orchestrator 
| 2025-09-20 09:58:31.182713 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-20 09:58:31.182723 | orchestrator | Saturday 20 September 2025 09:56:55 +0000 (0:00:02.193) 0:02:32.907 **** 2025-09-20 09:58:31.182732 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:31.182742 | orchestrator | 2025-09-20 09:58:31.182751 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 09:58:31.182761 | orchestrator | Saturday 20 September 2025 09:57:37 +0000 (0:00:41.544) 0:03:14.452 **** 2025-09-20 09:58:31.182771 | orchestrator | 2025-09-20 09:58:31.182781 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 09:58:31.182790 | orchestrator | Saturday 20 September 2025 09:57:37 +0000 (0:00:00.075) 0:03:14.527 **** 2025-09-20 09:58:31.182800 | orchestrator | 2025-09-20 09:58:31.182809 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 09:58:31.182819 | orchestrator | Saturday 20 September 2025 09:57:37 +0000 (0:00:00.367) 0:03:14.894 **** 2025-09-20 09:58:31.182829 | orchestrator | 2025-09-20 09:58:31.182838 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 09:58:31.182848 | orchestrator | Saturday 20 September 2025 09:57:37 +0000 (0:00:00.067) 0:03:14.961 **** 2025-09-20 09:58:31.182858 | orchestrator | 2025-09-20 09:58:31.182873 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 09:58:31.182884 | orchestrator | Saturday 20 September 2025 09:57:37 +0000 (0:00:00.136) 0:03:15.098 **** 2025-09-20 09:58:31.182893 | orchestrator | 2025-09-20 09:58:31.182903 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-20 09:58:31.182912 | orchestrator | Saturday 20 September 2025 09:57:38 +0000 
(0:00:00.205) 0:03:15.304 **** 2025-09-20 09:58:31.182922 | orchestrator | 2025-09-20 09:58:31.182932 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-20 09:58:31.182941 | orchestrator | Saturday 20 September 2025 09:57:38 +0000 (0:00:00.143) 0:03:15.447 **** 2025-09-20 09:58:31.182951 | orchestrator | changed: [testbed-node-0] 2025-09-20 09:58:31.182960 | orchestrator | changed: [testbed-node-2] 2025-09-20 09:58:31.182970 | orchestrator | changed: [testbed-node-1] 2025-09-20 09:58:31.182980 | orchestrator | 2025-09-20 09:58:31.182989 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-20 09:58:31.182999 | orchestrator | Saturday 20 September 2025 09:58:04 +0000 (0:00:26.214) 0:03:41.661 **** 2025-09-20 09:58:31.183009 | orchestrator | changed: [testbed-node-3] 2025-09-20 09:58:31.183018 | orchestrator | changed: [testbed-node-4] 2025-09-20 09:58:31.183028 | orchestrator | changed: [testbed-node-5] 2025-09-20 09:58:31.183037 | orchestrator | 2025-09-20 09:58:31.183047 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 09:58:31.183057 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-20 09:58:31.183073 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-20 09:58:31.183083 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-20 09:58:31.183093 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-20 09:58:31.183103 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-20 09:58:31.183112 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 
2025-09-20 09:58:31.183122 | orchestrator | 2025-09-20 09:58:31.183132 | orchestrator | 2025-09-20 09:58:31.183141 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 09:58:31.183151 | orchestrator | Saturday 20 September 2025 09:58:28 +0000 (0:00:24.444) 0:04:06.105 **** 2025-09-20 09:58:31.183161 | orchestrator | =============================================================================== 2025-09-20 09:58:31.183170 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.54s 2025-09-20 09:58:31.183180 | orchestrator | neutron : Restart neutron-server container ----------------------------- 26.21s 2025-09-20 09:58:31.183189 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 24.44s 2025-09-20 09:58:31.183199 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.71s 2025-09-20 09:58:31.183209 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.06s 2025-09-20 09:58:31.183218 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.46s 2025-09-20 09:58:31.183228 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.38s 2025-09-20 09:58:31.183238 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 5.07s 2025-09-20 09:58:31.183263 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.77s 2025-09-20 09:58:31.183282 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.59s 2025-09-20 09:58:31.183292 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.37s 2025-09-20 09:58:31.183302 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.30s 2025-09-20 09:58:31.183311 | orchestrator | neutron : 
Copying over fwaas_driver.ini --------------------------------- 3.63s 2025-09-20 09:58:31.183321 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.55s 2025-09-20 09:58:31.183330 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.53s 2025-09-20 09:58:31.183340 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.47s 2025-09-20 09:58:31.183349 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.44s 2025-09-20 09:58:31.183359 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.28s 2025-09-20 09:58:31.183368 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.15s 2025-09-20 09:58:31.183378 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 3.15s 2025-09-20 09:58:31.183388 | orchestrator | 2025-09-20 09:58:31 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:31.183398 | orchestrator | 2025-09-20 09:58:31 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED 2025-09-20 09:58:31.183408 | orchestrator | 2025-09-20 09:58:31 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED 2025-09-20 09:58:31.183428 | orchestrator | 2025-09-20 09:58:31 | INFO  | Wait 1 second(s) until the next check 2025-09-20 09:58:34.233657 | orchestrator | 2025-09-20 09:58:34 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:58:34.234398 | orchestrator | 2025-09-20 09:58:34 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state STARTED 2025-09-20 09:58:34.235982 | orchestrator | 2025-09-20 09:58:34 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED 2025-09-20 09:58:34.237689 | orchestrator | 2025-09-20 09:58:34 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state 
STARTED 2025-09-20 09:58:34.237884 | orchestrator | 2025-09-20 09:58:34 | INFO  | Wait 1 second(s) until the next check [... identical polling records (tasks 7f78cd06-e06a-4e62-8f7e-93379903a89b, 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5, 0549f771-b104-4f74-90a6-64f42b88bd35, 03b9268d-1d55-488f-8cb1-fac9958a214f in state STARTED, one-second waits) repeated every ~3 s from 09:58:37 through 09:59:07 omitted ...] 2025-09-20 09:59:10.826190 | orchestrator | 2025-09-20 09:59:10 | INFO  | Wait 1 second(s) until the next
check 2025-09-20 09:59:13.867372 | orchestrator | 2025-09-20 09:59:13 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 09:59:13.868196 | orchestrator | 2025-09-20 09:59:13 | INFO  | Task 1c6d2f6a-ce5d-43c1-ba34-87943aee06e5 is in state SUCCESS 2025-09-20 09:59:13.869630 | orchestrator | 2025-09-20 09:59:13.869664 | orchestrator | 2025-09-20 09:59:13.869676 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 09:59:13.869688 | orchestrator | 2025-09-20 09:59:13.869699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 09:59:13.869710 | orchestrator | Saturday 20 September 2025 09:58:05 +0000 (0:00:00.340) 0:00:00.340 **** 2025-09-20 09:59:13.869722 | orchestrator | ok: [testbed-node-0] 2025-09-20 09:59:13.869734 | orchestrator | ok: [testbed-node-1] 2025-09-20 09:59:13.869745 | orchestrator | ok: [testbed-node-2] 2025-09-20 09:59:13.869756 | orchestrator | 2025-09-20 09:59:13.869767 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 09:59:13.869778 | orchestrator | Saturday 20 September 2025 09:58:05 +0000 (0:00:00.617) 0:00:00.958 **** 2025-09-20 09:59:13.869790 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-20 09:59:13.869801 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-20 09:59:13.869812 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-20 09:59:13.869823 | orchestrator | 2025-09-20 09:59:13.869834 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-20 09:59:13.869845 | orchestrator | 2025-09-20 09:59:13.869855 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-20 09:59:13.869866 | orchestrator | Saturday 20 September 2025 09:58:06 +0000 (0:00:00.433) 0:00:01.391 
**** 2025-09-20 09:59:13.869877 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:59:13.869889 | orchestrator | 2025-09-20 09:59:13.869899 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-20 09:59:13.869910 | orchestrator | Saturday 20 September 2025 09:58:06 +0000 (0:00:00.513) 0:00:01.905 **** 2025-09-20 09:59:13.869921 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-20 09:59:13.869932 | orchestrator | 2025-09-20 09:59:13.869943 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-20 09:59:13.869953 | orchestrator | Saturday 20 September 2025 09:58:10 +0000 (0:00:03.493) 0:00:05.398 **** 2025-09-20 09:59:13.869964 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-20 09:59:13.869975 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-20 09:59:13.870066 | orchestrator | 2025-09-20 09:59:13.870084 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-20 09:59:13.870095 | orchestrator | Saturday 20 September 2025 09:58:17 +0000 (0:00:06.919) 0:00:12.318 **** 2025-09-20 09:59:13.870106 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 09:59:13.870117 | orchestrator | 2025-09-20 09:59:13.870128 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-20 09:59:13.870139 | orchestrator | Saturday 20 September 2025 09:58:20 +0000 (0:00:03.088) 0:00:15.406 **** 2025-09-20 09:59:13.870150 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 09:59:13.870160 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-20 09:59:13.870171 | 
orchestrator | 2025-09-20 09:59:13.870182 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-20 09:59:13.870193 | orchestrator | Saturday 20 September 2025 09:58:23 +0000 (0:00:03.837) 0:00:19.244 **** 2025-09-20 09:59:13.870203 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 09:59:13.870215 | orchestrator | 2025-09-20 09:59:13.870226 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-20 09:59:13.870265 | orchestrator | Saturday 20 September 2025 09:58:27 +0000 (0:00:03.513) 0:00:22.758 **** 2025-09-20 09:59:13.870278 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-20 09:59:13.870291 | orchestrator | 2025-09-20 09:59:13.870303 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-20 09:59:13.870316 | orchestrator | Saturday 20 September 2025 09:58:32 +0000 (0:00:04.737) 0:00:27.495 **** 2025-09-20 09:59:13.870328 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:59:13.870340 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:59:13.870352 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:59:13.870364 | orchestrator | 2025-09-20 09:59:13.870377 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-20 09:59:13.870426 | orchestrator | Saturday 20 September 2025 09:58:32 +0000 (0:00:00.290) 0:00:27.786 **** 2025-09-20 09:59:13.870459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 09:59:13.870492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 09:59:13.870517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 09:59:13.870530 | orchestrator | 2025-09-20 09:59:13.870543 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-20 09:59:13.870555 | orchestrator | Saturday 20 September 2025 09:58:33 +0000 (0:00:00.863) 0:00:28.649 **** 2025-09-20 09:59:13.870568 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:59:13.870580 | orchestrator | 2025-09-20 09:59:13.870593 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-20 09:59:13.870605 | orchestrator | Saturday 20 September 2025 09:58:33 +0000 (0:00:00.125) 0:00:28.774 **** 2025-09-20 09:59:13.870615 | orchestrator | skipping: [testbed-node-0] 2025-09-20 09:59:13.870626 | orchestrator | skipping: [testbed-node-1] 2025-09-20 09:59:13.870637 | orchestrator | skipping: [testbed-node-2] 2025-09-20 09:59:13.870647 | orchestrator | 2025-09-20 09:59:13.870658 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-20 09:59:13.870669 | orchestrator | Saturday 20 September 2025 09:58:33 +0000 (0:00:00.511) 0:00:29.286 **** 2025-09-20 09:59:13.870679 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 09:59:13.870690 | orchestrator | 2025-09-20 09:59:13.870700 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-20 09:59:13.870711 | orchestrator | Saturday 20 September 2025 09:58:34 +0000 (0:00:00.570) 0:00:29.856 **** 2025-09-20 09:59:13.870776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 
'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 09:59:13.870800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 09:59:13.870819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-20 09:59:13.870831 | orchestrator | 2025-09-20 09:59:13.870842 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-20 09:59:13.870853 | orchestrator | Saturday 20 September 2025 09:58:36 +0000 (0:00:01.622) 0:00:31.479 **** 2025-09-20 09:59:13.870864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-20 09:59:13.870910 | orchestrator | skipping: [testbed-node-0] 
2025-09-20 09:59:13.870922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.870939 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:59:13.870958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.870992 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:59:13.871004 | orchestrator |
2025-09-20 09:59:13.871015 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-09-20 09:59:13.871026 | orchestrator | Saturday 20 September 2025 09:58:37 +0000 (0:00:00.904) 0:00:32.384 ****
2025-09-20 09:59:13.871038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871049 | orchestrator | skipping: [testbed-node-0]
2025-09-20 09:59:13.871061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871072 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:59:13.871084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871096 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:59:13.871107 | orchestrator |
2025-09-20 09:59:13.871117 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-09-20 09:59:13.871128 | orchestrator | Saturday 20 September 2025 09:58:37 +0000 (0:00:00.641) 0:00:33.026 ****
2025-09-20 09:59:13.871163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871207 | orchestrator |
2025-09-20 09:59:13.871218 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-09-20 09:59:13.871247 | orchestrator | Saturday 20 September 2025 09:58:39 +0000 (0:00:01.554) 0:00:34.580 ****
2025-09-20 09:59:13.871260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871317 | orchestrator |
2025-09-20 09:59:13.871328 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-09-20 09:59:13.871339 | orchestrator | Saturday 20 September 2025 09:58:41 +0000 (0:00:02.391) 0:00:36.972 ****
2025-09-20 09:59:13.871350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-20 09:59:13.871361 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-20 09:59:13.871372 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-20 09:59:13.871382 | orchestrator |
2025-09-20 09:59:13.871393 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-09-20 09:59:13.871404 | orchestrator | Saturday 20 September 2025 09:58:43 +0000 (0:00:01.735) 0:00:38.708 ****
2025-09-20 09:59:13.871414 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:59:13.871425 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:59:13.871436 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:59:13.871446 | orchestrator |
2025-09-20 09:59:13.871457 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-09-20 09:59:13.871468 | orchestrator | Saturday 20 September 2025 09:58:44 +0000 (0:00:01.246) 0:00:39.954 ****
2025-09-20 09:59:13.871479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871490 |
orchestrator | skipping: [testbed-node-0]
2025-09-20 09:59:13.871507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871524 | orchestrator | skipping: [testbed-node-1]
2025-09-20 09:59:13.871543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871555 | orchestrator | skipping: [testbed-node-2]
2025-09-20 09:59:13.871566 | orchestrator |
2025-09-20 09:59:13.871577 | orchestrator | TASK [placement : Check placement containers] **********************************
2025-09-20 09:59:13.871588 | orchestrator | Saturday 20 September 2025 09:58:45 +0000 (0:00:00.473) 0:00:40.427 ****
2025-09-20 09:59:13.871599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-20 09:59:13.871640 | orchestrator |
2025-09-20 09:59:13.871651 | orchestrator | TASK [placement : Creating placement databases] ********************************
2025-09-20 09:59:13.871672 | orchestrator | Saturday 20 September 2025 09:58:46 +0000 (0:00:01.063) 0:00:41.491 ****
2025-09-20 09:59:13.871683 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:59:13.871694 | orchestrator |
2025-09-20 09:59:13.871704 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2025-09-20 09:59:13.871715 | orchestrator | Saturday 20 September 2025 09:58:48 +0000 (0:00:02.272) 0:00:43.764 ****
2025-09-20 09:59:13.871726 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:59:13.871736 | orchestrator |
2025-09-20 09:59:13.871747 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2025-09-20 09:59:13.871758 | orchestrator | Saturday 20 September 2025 09:58:51 +0000 (0:00:02.643) 0:00:46.407 ****
2025-09-20 09:59:13.871768 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:59:13.871779 | orchestrator |
2025-09-20 09:59:13.871789 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-20 09:59:13.871800 | orchestrator | Saturday 20 September 2025 09:59:04 +0000 (0:00:13.484) 0:00:59.891 ****
2025-09-20 09:59:13.871811 | orchestrator |
2025-09-20 09:59:13.871821 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-20 09:59:13.871832 | orchestrator | Saturday 20 September 2025 09:59:04 +0000 (0:00:00.068) 0:00:59.960 ****
2025-09-20 09:59:13.871842 | orchestrator |
2025-09-20 09:59:13.871860 | orchestrator | TASK [placement : Flush handlers] **********************************************
2025-09-20 09:59:13.871872 | orchestrator | Saturday 20 September 2025 09:59:04 +0000 (0:00:00.080) 0:01:00.040 ****
2025-09-20 09:59:13.871882 | orchestrator |
2025-09-20 09:59:13.871893 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2025-09-20 09:59:13.871904 | orchestrator | Saturday 20 September 2025 09:59:04 +0000 (0:00:00.075) 0:01:00.116 ****
2025-09-20 09:59:13.871915 | orchestrator | changed: [testbed-node-1]
2025-09-20 09:59:13.871926 | orchestrator | changed: [testbed-node-2]
2025-09-20 09:59:13.871936 | orchestrator | changed: [testbed-node-0]
2025-09-20 09:59:13.871947 | orchestrator |
2025-09-20 09:59:13.871958 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 09:59:13.871970 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-20 09:59:13.871982 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 09:59:13.871993 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 09:59:13.872003 | orchestrator |
2025-09-20 09:59:13.872014 | orchestrator |
2025-09-20 09:59:13.872025 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 09:59:13.872035 | orchestrator | Saturday 20 September 2025 09:59:12 +0000 (0:00:08.109) 0:01:08.225 ****
2025-09-20 09:59:13.872046 | orchestrator | ===============================================================================
2025-09-20 09:59:13.872057 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.48s
2025-09-20 09:59:13.872067 | orchestrator | placement : Restart placement-api container ----------------------------- 8.11s
2025-09-20 09:59:13.872078 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.92s
2025-09-20 09:59:13.872088 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.74s
2025-09-20 09:59:13.872117 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.84s
2025-09-20 09:59:13.872128 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.51s
2025-09-20 09:59:13.872139 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.49s
2025-09-20 09:59:13.872150 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.09s
2025-09-20 09:59:13.872160 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.64s
2025-09-20 09:59:13.872171 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.39s
2025-09-20 09:59:13.872182 | orchestrator | placement : Creating placement databases -------------------------------- 2.27s
2025-09-20 09:59:13.872192 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.74s
2025-09-20 09:59:13.872203 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.62s
2025-09-20 09:59:13.872214 | orchestrator | placement : Copying over config.json files for services ----------------- 1.55s
2025-09-20 09:59:13.872224 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.25s
2025-09-20 09:59:13.872285 | orchestrator | placement : Check placement containers ---------------------------------- 1.06s
2025-09-20 09:59:13.872297 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.91s
2025-09-20 09:59:13.872308 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.86s
2025-09-20 09:59:13.872318 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.64s
2025-09-20 09:59:13.872329 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s
2025-09-20 09:59:13.872340 | orchestrator | 2025-09-20 09:59:13 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:13.872451 | orchestrator | 2025-09-20 09:59:13 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:13.872466 | orchestrator | 2025-09-20 09:59:13 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:16.916305 | orchestrator | 2025-09-20 09:59:16 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:16.916837 | orchestrator | 2025-09-20 09:59:16 | INFO  | Task 36237e8e-b1a2-4598-beae-0b533b73c1eb is in state STARTED
2025-09-20 09:59:16.917810 | orchestrator | 2025-09-20 09:59:16 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:16.918746 | orchestrator | 2025-09-20 09:59:16 | INFO  | Task
03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:16.918770 | orchestrator | 2025-09-20 09:59:16 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:19.975021 | orchestrator | 2025-09-20 09:59:19 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:19.975887 | orchestrator | 2025-09-20 09:59:19 | INFO  | Task 36237e8e-b1a2-4598-beae-0b533b73c1eb is in state SUCCESS
2025-09-20 09:59:19.978620 | orchestrator | 2025-09-20 09:59:19 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:19.979464 | orchestrator | 2025-09-20 09:59:19 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:19.979500 | orchestrator | 2025-09-20 09:59:19 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:23.021116 | orchestrator | 2025-09-20 09:59:23 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:23.021218 | orchestrator | 2025-09-20 09:59:23 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:23.024133 | orchestrator | 2025-09-20 09:59:23 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:23.024899 | orchestrator | 2025-09-20 09:59:23 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:23.024933 | orchestrator | 2025-09-20 09:59:23 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:26.066478 | orchestrator | 2025-09-20 09:59:26 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:26.071465 | orchestrator | 2025-09-20 09:59:26 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:26.071800 | orchestrator | 2025-09-20 09:59:26 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:26.073135 | orchestrator | 2025-09-20 09:59:26 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:26.073189 | orchestrator | 2025-09-20 09:59:26 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:29.127607 | orchestrator | 2025-09-20 09:59:29 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:29.128855 | orchestrator | 2025-09-20 09:59:29 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:29.130563 | orchestrator | 2025-09-20 09:59:29 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:29.131807 | orchestrator | 2025-09-20 09:59:29 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:29.132098 | orchestrator | 2025-09-20 09:59:29 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:32.182209 | orchestrator | 2025-09-20 09:59:32 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:32.182890 | orchestrator | 2025-09-20 09:59:32 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:32.184221 | orchestrator | 2025-09-20 09:59:32 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:32.186276 | orchestrator | 2025-09-20 09:59:32 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:32.186655 | orchestrator | 2025-09-20 09:59:32 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:35.227290 | orchestrator | 2025-09-20 09:59:35 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:35.228108 | orchestrator | 2025-09-20 09:59:35 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:35.230133 | orchestrator | 2025-09-20 09:59:35 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:35.231526 | orchestrator | 2025-09-20 09:59:35 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:35.232070 | orchestrator | 2025-09-20 09:59:35 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:38.285122 | orchestrator | 2025-09-20 09:59:38 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:38.287357 | orchestrator | 2025-09-20 09:59:38 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:38.288621 | orchestrator | 2025-09-20 09:59:38 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:38.291109 | orchestrator | 2025-09-20 09:59:38 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:38.291147 | orchestrator | 2025-09-20 09:59:38 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:41.336522 | orchestrator | 2025-09-20 09:59:41 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:41.336731 | orchestrator | 2025-09-20 09:59:41 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:41.336850 | orchestrator | 2025-09-20 09:59:41 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:41.337806 | orchestrator | 2025-09-20 09:59:41 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:41.337899 | orchestrator | 2025-09-20 09:59:41 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:44.391126 | orchestrator | 2025-09-20 09:59:44 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:44.392254 | orchestrator | 2025-09-20 09:59:44 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:44.394006 | orchestrator | 2025-09-20 09:59:44 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:44.395805 | orchestrator | 2025-09-20 09:59:44 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:44.396432 | orchestrator | 2025-09-20 09:59:44 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:47.443890 | orchestrator | 2025-09-20 09:59:47 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:47.444964 | orchestrator | 2025-09-20 09:59:47 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:47.447121 | orchestrator | 2025-09-20 09:59:47 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:47.448821 | orchestrator | 2025-09-20 09:59:47 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:47.448845 | orchestrator | 2025-09-20 09:59:47 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:50.495656 | orchestrator | 2025-09-20 09:59:50 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:50.496578 | orchestrator | 2025-09-20 09:59:50 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:50.497524 | orchestrator | 2025-09-20 09:59:50 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:50.499137 | orchestrator | 2025-09-20 09:59:50 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:50.499162 | orchestrator | 2025-09-20 09:59:50 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:53.549975 | orchestrator | 2025-09-20 09:59:53 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:53.550676 | orchestrator | 2025-09-20 09:59:53 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:53.551354 | orchestrator | 2025-09-20 09:59:53 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:53.552378 | orchestrator | 2025-09-20 09:59:53 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:53.552408 | orchestrator | 2025-09-20 09:59:53 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:56.584051 | orchestrator | 2025-09-20 09:59:56 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:56.584852 | orchestrator | 2025-09-20 09:59:56 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:56.585543 | orchestrator | 2025-09-20 09:59:56 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:56.586886 | orchestrator | 2025-09-20 09:59:56 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:56.587637 | orchestrator | 2025-09-20 09:59:56 | INFO  | Wait 1 second(s) until the next check
2025-09-20 09:59:59.641926 | orchestrator | 2025-09-20 09:59:59 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 09:59:59.643375 | orchestrator | 2025-09-20 09:59:59 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 09:59:59.645399 | orchestrator | 2025-09-20 09:59:59 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 09:59:59.647209 | orchestrator | 2025-09-20 09:59:59 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 09:59:59.647261 | orchestrator | 2025-09-20 09:59:59 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:00:02.687702 | orchestrator | 2025-09-20 10:00:02 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 10:00:02.687806 | orchestrator | 2025-09-20 10:00:02 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 10:00:02.689736 | orchestrator | 2025-09-20 10:00:02 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 10:00:02.690613 | orchestrator | 2025-09-20 10:00:02 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 10:00:02.690639 | orchestrator | 2025-09-20 10:00:02 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:00:05.732492 | orchestrator | 2025-09-20 10:00:05 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 10:00:05.734586 | orchestrator | 2025-09-20 10:00:05 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 10:00:05.737441 | orchestrator | 2025-09-20 10:00:05 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 10:00:05.738688 | orchestrator | 2025-09-20 10:00:05 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 10:00:05.738766 | orchestrator | 2025-09-20 10:00:05 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:00:08.784900 | orchestrator | 2025-09-20 10:00:08 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 10:00:08.789036 | orchestrator | 2025-09-20 10:00:08 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 10:00:08.791512 | orchestrator | 2025-09-20 10:00:08 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state STARTED
2025-09-20 10:00:08.793675 | orchestrator | 2025-09-20 10:00:08 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED
2025-09-20 10:00:08.793744 | orchestrator | 2025-09-20 10:00:08 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:00:11.845882 | orchestrator | 2025-09-20 10:00:11 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 10:00:11.847981 | orchestrator | 2025-09-20 10:00:11 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 10:00:11.851061 | orchestrator | 2025-09-20 10:00:11 | INFO  | Task 0549f771-b104-4f74-90a6-64f42b88bd35 is in state SUCCESS
2025-09-20 10:00:11.852967 | orchestrator |
2025-09-20 10:00:11.852997 | orchestrator |
2025-09-20 10:00:11.853009 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 10:00:11.853020 | orchestrator |
2025-09-20 10:00:11.853031 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 10:00:11.853042 | orchestrator | Saturday 20 September 2025 09:59:17 +0000 (0:00:00.186) 0:00:00.186 ****
2025-09-20 10:00:11.853054 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:00:11.853066 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:00:11.853077 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:00:11.853087 | orchestrator |
2025-09-20 10:00:11.853098 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 10:00:11.853178 | orchestrator | Saturday 20 September 2025 09:59:18 +0000 (0:00:00.335) 0:00:00.522 ****
2025-09-20 10:00:11.853192 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-09-20 10:00:11.853203 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-09-20 10:00:11.853288 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-09-20 10:00:11.853302 | orchestrator |
2025-09-20 10:00:11.853313 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-09-20 10:00:11.853324 | orchestrator |
2025-09-20 10:00:11.853335 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-09-20 10:00:11.853346 | orchestrator | Saturday 20 September 2025 09:59:18 +0000 (0:00:00.672) 0:00:01.194 ****
2025-09-20 10:00:11.853356 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:00:11.853367 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:00:11.853377 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:00:11.853388 | orchestrator |
2025-09-20 10:00:11.853398 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 10:00:11.853410 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 10:00:11.853422 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 10:00:11.853433 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 10:00:11.853444 | orchestrator |
2025-09-20 10:00:11.853455 | orchestrator |
2025-09-20 10:00:11.853465 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 10:00:11.853488 | orchestrator | Saturday 20 September 2025 09:59:19 +0000 (0:00:00.679) 0:00:01.873 ****
2025-09-20 10:00:11.853499 | orchestrator | ===============================================================================
2025-09-20 10:00:11.853509 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.68s
2025-09-20 10:00:11.853520 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s
2025-09-20 10:00:11.853531 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2025-09-20 10:00:11.853541 | orchestrator |
2025-09-20 10:00:11.853552 | orchestrator |
2025-09-20 10:00:11.853563 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 10:00:11.853573 | orchestrator |
2025-09-20 10:00:11.853587 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 10:00:11.853599 | orchestrator | Saturday 20 September 2025 09:58:17 +0000 (0:00:00.236) 0:00:00.236 ****
2025-09-20 10:00:11.853612 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:00:11.853624 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:00:11.853636 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:00:11.853648 | orchestrator |
2025-09-20 10:00:11.853660 | orchestrator | TASK [Group
hosts based on enabled services] *********************************** 2025-09-20 10:00:11.853672 | orchestrator | Saturday 20 September 2025 09:58:17 +0000 (0:00:00.274) 0:00:00.511 **** 2025-09-20 10:00:11.853685 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-20 10:00:11.853698 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-20 10:00:11.853710 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-20 10:00:11.853722 | orchestrator | 2025-09-20 10:00:11.853734 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-20 10:00:11.853746 | orchestrator | 2025-09-20 10:00:11.853758 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-20 10:00:11.853771 | orchestrator | Saturday 20 September 2025 09:58:18 +0000 (0:00:00.376) 0:00:00.887 **** 2025-09-20 10:00:11.853820 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:00:11.853833 | orchestrator | 2025-09-20 10:00:11.853846 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-20 10:00:11.853868 | orchestrator | Saturday 20 September 2025 09:58:18 +0000 (0:00:00.528) 0:00:01.416 **** 2025-09-20 10:00:11.853881 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-20 10:00:11.853894 | orchestrator | 2025-09-20 10:00:11.853905 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-20 10:00:11.853919 | orchestrator | Saturday 20 September 2025 09:58:22 +0000 (0:00:03.237) 0:00:04.654 **** 2025-09-20 10:00:11.853931 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-20 10:00:11.853942 | orchestrator | changed: [testbed-node-0] => (item=magnum -> 
https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-20 10:00:11.853953 | orchestrator | 2025-09-20 10:00:11.853964 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-20 10:00:11.853974 | orchestrator | Saturday 20 September 2025 09:58:29 +0000 (0:00:07.167) 0:00:11.822 **** 2025-09-20 10:00:11.853985 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 10:00:11.853995 | orchestrator | 2025-09-20 10:00:11.854006 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-20 10:00:11.854056 | orchestrator | Saturday 20 September 2025 09:58:32 +0000 (0:00:03.628) 0:00:15.451 **** 2025-09-20 10:00:11.854083 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 10:00:11.854099 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-20 10:00:11.854119 | orchestrator | 2025-09-20 10:00:11.854146 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-20 10:00:11.854166 | orchestrator | Saturday 20 September 2025 09:58:37 +0000 (0:00:04.256) 0:00:19.707 **** 2025-09-20 10:00:11.854184 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 10:00:11.854202 | orchestrator | 2025-09-20 10:00:11.854246 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-20 10:00:11.854263 | orchestrator | Saturday 20 September 2025 09:58:40 +0000 (0:00:03.578) 0:00:23.285 **** 2025-09-20 10:00:11.854283 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-20 10:00:11.854301 | orchestrator | 2025-09-20 10:00:11.854319 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-20 10:00:11.854338 | orchestrator | Saturday 20 September 2025 09:58:44 +0000 (0:00:03.756) 0:00:27.042 **** 2025-09-20 10:00:11.854356 | orchestrator | 
changed: [testbed-node-0] 2025-09-20 10:00:11.854374 | orchestrator | 2025-09-20 10:00:11.854393 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-20 10:00:11.854411 | orchestrator | Saturday 20 September 2025 09:58:47 +0000 (0:00:02.776) 0:00:29.818 **** 2025-09-20 10:00:11.854431 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:00:11.854450 | orchestrator | 2025-09-20 10:00:11.854469 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-20 10:00:11.854489 | orchestrator | Saturday 20 September 2025 09:58:51 +0000 (0:00:03.963) 0:00:33.782 **** 2025-09-20 10:00:11.854508 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:00:11.854527 | orchestrator | 2025-09-20 10:00:11.854546 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-20 10:00:11.854565 | orchestrator | Saturday 20 September 2025 09:58:55 +0000 (0:00:03.900) 0:00:37.683 **** 2025-09-20 10:00:11.854596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.854622 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.854634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.854655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.854669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.854684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-20 10:00:11.854702 | orchestrator |
2025-09-20 10:00:11.854713 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-09-20 10:00:11.854724 | orchestrator | Saturday 20 September 2025 09:58:56 +0000 (0:00:01.583) 0:00:39.266 ****
2025-09-20 10:00:11.854735 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:00:11.854746 | orchestrator |
2025-09-20 10:00:11.854757 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-09-20 10:00:11.854767 | orchestrator | Saturday 20 September 2025 09:58:56 +0000 (0:00:00.133) 0:00:39.400 ****
2025-09-20 10:00:11.854778 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:00:11.854788 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:00:11.854798 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:00:11.854809 | orchestrator |
2025-09-20 10:00:11.854819 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-09-20 10:00:11.854830 | orchestrator | Saturday 20 September 2025 09:58:57 +0000 (0:00:00.521) 0:00:39.921 ****
2025-09-20 10:00:11.854840 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-20 10:00:11.854851 | orchestrator |
2025-09-20 10:00:11.854862 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-09-20 10:00:11.854872 | orchestrator | Saturday 20 September 2025 09:58:58 +0000 (0:00:01.083) 0:00:41.005 ****
2025-09-20 10:00:11.854883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes':
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.854904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.854916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.854939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.854950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})
2025-09-20 10:00:11.854962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-20 10:00:11.854973 | orchestrator |
2025-09-20 10:00:11.854983 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-09-20 10:00:11.854994 | orchestrator | Saturday 20 September 2025 09:59:01 +0000 (0:00:02.589) 0:00:43.594 ****
2025-09-20 10:00:11.855005 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:00:11.855016 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:00:11.855026 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:00:11.855037 | orchestrator |
2025-09-20 10:00:11.855048 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-20 10:00:11.855064 | orchestrator | Saturday 20 September 2025 09:59:01 +0000 (0:00:00.297) 0:00:43.891 ****
2025-09-20 10:00:11.855075 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:00:11.855086 | orchestrator |
2025-09-20 10:00:11.855097 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-09-20 10:00:11.855108 | orchestrator | Saturday 20 September 2025 09:59:02 +0000 (0:00:00.797) 0:00:44.689 ****
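Editor's note: both the "Waiting for Nova public port to be UP" task earlier in this log and the `healthcheck_curl http://192.168.16.x:9511` container healthchecks boil down to the same pattern: repeatedly try to reach a host/port, sleeping between attempts, until it answers or a deadline passes. A minimal sketch of that pattern (not the actual Ansible `wait_for` or kolla healthcheck implementation; function name and defaults are illustrative):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 300.0, interval: float = 1.0) -> bool:
    """Poll until a TCP port accepts connections; return False if the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the service port is UP.
            with socket.create_connection((host, port), timeout=5):
                return True
        except OSError:
            # Port not reachable yet; wait before the next check,
            # like the "Wait 1 second(s) until the next check" lines above.
            time.sleep(interval)
    return False
```

For example, `wait_for_port("192.168.16.10", 9511)` would correspond to the magnum-api healthcheck target shown in the config dumps above.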
2025-09-20 10:00:11.855119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.855141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.855153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.855164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.855183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.855195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.855232 | orchestrator | 2025-09-20 10:00:11.855244 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-20 10:00:11.855255 | orchestrator | Saturday 20 September 2025 09:59:04 +0000 (0:00:02.464) 0:00:47.153 **** 2025-09-20 10:00:11.855281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:00:11.855293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:00:11.855304 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:00:11.855316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:00:11.855334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:00:11.855353 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:00:11.855364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:00:11.855385 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:00:11.855397 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:00:11.855408 | orchestrator | 2025-09-20 10:00:11.855419 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-20 10:00:11.855430 | orchestrator | Saturday 20 September 2025 09:59:05 +0000 (0:00:00.893) 0:00:48.047 **** 2025-09-20 10:00:11.855441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2025-09-20 10:00:11.855453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:00:11.855464 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:00:11.855481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:00:11.855500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:00:11.855516 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:00:11.855527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:00:11.855539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:00:11.855550 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:00:11.855560 | orchestrator | 2025-09-20 10:00:11.855571 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-20 10:00:11.855582 | orchestrator | Saturday 20 September 2025 09:59:06 +0000 (0:00:01.326) 0:00:49.374 **** 2025-09-20 10:00:11.855599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.855621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.855637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.855648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.855660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.855678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.855695 | orchestrator | 2025-09-20 10:00:11.855706 | orchestrator | TASK 
[magnum : Copying over magnum.conf] *************************************** 2025-09-20 10:00:11.855717 | orchestrator | Saturday 20 September 2025 09:59:09 +0000 (0:00:02.667) 0:00:52.041 **** 2025-09-20 10:00:11.855729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.855744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.855756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.855767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.855832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.855845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.855856 | orchestrator | 2025-09-20 10:00:11.855868 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-20 10:00:11.855878 | orchestrator | Saturday 20 September 2025 09:59:14 +0000 (0:00:05.414) 0:00:57.455 **** 2025-09-20 10:00:11.855894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:00:11.855905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:00:11.855917 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:00:11.855928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:00:11.855953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:00:11.855965 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:00:11.855976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-20 10:00:11.855992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:00:11.856003 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:00:11.856014 | orchestrator | 2025-09-20 10:00:11.856025 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-20 10:00:11.856036 | orchestrator | Saturday 20 September 2025 09:59:15 +0000 (0:00:00.724) 0:00:58.180 **** 2025-09-20 10:00:11.856047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.856071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.856083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-20 10:00:11.856098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.856110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.856121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:00:11.856138 | orchestrator | 2025-09-20 10:00:11.856150 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-20 10:00:11.856161 | orchestrator | Saturday 20 September 2025 09:59:18 +0000 (0:00:02.898) 0:01:01.078 **** 2025-09-20 10:00:11.856171 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:00:11.856182 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:00:11.856193 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:00:11.856204 | orchestrator | 2025-09-20 10:00:11.856254 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-20 10:00:11.856268 | orchestrator | Saturday 20 September 2025 09:59:18 +0000 (0:00:00.301) 0:01:01.380 **** 2025-09-20 10:00:11.856278 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:00:11.856289 | orchestrator | 2025-09-20 10:00:11.856300 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-20 10:00:11.856311 | orchestrator | Saturday 20 September 2025 09:59:21 +0000 (0:00:02.225) 0:01:03.606 **** 2025-09-20 10:00:11.856322 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:00:11.856333 | orchestrator | 2025-09-20 10:00:11.856344 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-20 10:00:11.856355 | orchestrator | Saturday 20 September 2025 09:59:23 +0000 (0:00:02.111) 0:01:05.718 **** 2025-09-20 10:00:11.856374 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:00:11.856385 | orchestrator | 2025-09-20 
10:00:11.856397 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-20 10:00:11.856407 | orchestrator | Saturday 20 September 2025 09:59:39 +0000 (0:00:16.747) 0:01:22.465 **** 2025-09-20 10:00:11.856418 | orchestrator | 2025-09-20 10:00:11.856429 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-20 10:00:11.856440 | orchestrator | Saturday 20 September 2025 09:59:39 +0000 (0:00:00.068) 0:01:22.534 **** 2025-09-20 10:00:11.856451 | orchestrator | 2025-09-20 10:00:11.856462 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-20 10:00:11.856472 | orchestrator | Saturday 20 September 2025 09:59:40 +0000 (0:00:00.069) 0:01:22.604 **** 2025-09-20 10:00:11.856483 | orchestrator | 2025-09-20 10:00:11.856494 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-20 10:00:11.856505 | orchestrator | Saturday 20 September 2025 09:59:40 +0000 (0:00:00.067) 0:01:22.671 **** 2025-09-20 10:00:11.856516 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:00:11.856526 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:00:11.856537 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:00:11.856548 | orchestrator | 2025-09-20 10:00:11.856559 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-20 10:00:11.856569 | orchestrator | Saturday 20 September 2025 09:59:59 +0000 (0:00:19.604) 0:01:42.276 **** 2025-09-20 10:00:11.856580 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:00:11.856591 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:00:11.856602 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:00:11.856612 | orchestrator | 2025-09-20 10:00:11.856623 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:00:11.856635 | 
orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-20 10:00:11.856646 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-20 10:00:11.856657 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-20 10:00:11.856675 | orchestrator | 2025-09-20 10:00:11.856686 | orchestrator | 2025-09-20 10:00:11.856705 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:00:11.856716 | orchestrator | Saturday 20 September 2025 10:00:09 +0000 (0:00:10.133) 0:01:52.409 **** 2025-09-20 10:00:11.856727 | orchestrator | =============================================================================== 2025-09-20 10:00:11.856738 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.60s 2025-09-20 10:00:11.856749 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.75s 2025-09-20 10:00:11.856760 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.13s 2025-09-20 10:00:11.856770 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.17s 2025-09-20 10:00:11.856781 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.41s 2025-09-20 10:00:11.856792 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.26s 2025-09-20 10:00:11.856803 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.96s 2025-09-20 10:00:11.856814 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.90s 2025-09-20 10:00:11.856824 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.76s 2025-09-20 10:00:11.856835 | orchestrator | 
service-ks-register : magnum | Creating projects ------------------------ 3.63s 2025-09-20 10:00:11.856846 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.58s 2025-09-20 10:00:11.856857 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.24s 2025-09-20 10:00:11.856867 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.90s 2025-09-20 10:00:11.856878 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.78s 2025-09-20 10:00:11.856889 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.67s 2025-09-20 10:00:11.856899 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.59s 2025-09-20 10:00:11.856910 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.46s 2025-09-20 10:00:11.856921 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.23s 2025-09-20 10:00:11.856932 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.11s 2025-09-20 10:00:11.856942 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.58s 2025-09-20 10:00:11.856953 | orchestrator | 2025-09-20 10:00:11 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED 2025-09-20 10:00:11.856964 | orchestrator | 2025-09-20 10:00:11 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:00:14.902954 | orchestrator | 2025-09-20 10:00:14 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED 2025-09-20 10:00:14.903379 | orchestrator | 2025-09-20 10:00:14 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:00:14.905011 | orchestrator | 2025-09-20 10:00:14 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state STARTED 2025-09-20 
10:00:14.905033 | orchestrator | 2025-09-20 10:00:14 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:00:51.426260 | orchestrator | 2025-09-20 10:00:51 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 10:00:51.427007 | orchestrator 
| 2025-09-20 10:00:51 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 10:00:51.430343 | orchestrator | 2025-09-20 10:00:51 | INFO  | Task 03b9268d-1d55-488f-8cb1-fac9958a214f is in state SUCCESS
2025-09-20 10:00:51.432050 | orchestrator |
2025-09-20 10:00:51.432084 | orchestrator |
2025-09-20 10:00:51.432097 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 10:00:51.432109 | orchestrator |
2025-09-20 10:00:51.432121 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 10:00:51.432134 | orchestrator | Saturday 20 September 2025 09:58:33 +0000 (0:00:00.289) 0:00:00.289 ****
2025-09-20 10:00:51.432146 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:00:51.432159 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:00:51.432187 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:00:51.432199 | orchestrator |
2025-09-20 10:00:51.432250 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 10:00:51.432263 | orchestrator | Saturday 20 September 2025 09:58:34 +0000 (0:00:00.307) 0:00:00.596 ****
2025-09-20 10:00:51.432274 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-09-20 10:00:51.432286 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-09-20 10:00:51.432297 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-09-20 10:00:51.432308 | orchestrator |
2025-09-20 10:00:51.432319 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-09-20 10:00:51.432330 | orchestrator |
2025-09-20 10:00:51.432341 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-20 10:00:51.432352 | orchestrator | Saturday 20 September 2025 09:58:34 +0000 (0:00:00.451) 0:00:01.048 ****
2025-09-20 10:00:51.432363 | orchestrator | 
included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:00:51.432375 | orchestrator | 2025-09-20 10:00:51.432386 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-20 10:00:51.432397 | orchestrator | Saturday 20 September 2025 09:58:35 +0000 (0:00:00.585) 0:00:01.633 **** 2025-09-20 10:00:51.432437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.432484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.432497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.432508 | orchestrator | 2025-09-20 10:00:51.432520 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-20 10:00:51.432531 | orchestrator | Saturday 20 September 2025 09:58:36 +0000 (0:00:00.842) 0:00:02.475 **** 2025-09-20 10:00:51.432542 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-20 10:00:51.432553 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-20 10:00:51.432564 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-20 10:00:51.432609 | orchestrator | 2025-09-20 10:00:51.432621 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-20 10:00:51.432633 | orchestrator | Saturday 20 September 2025 09:58:36 +0000 (0:00:00.888) 0:00:03.364 **** 2025-09-20 10:00:51.432647 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:00:51.432661 | orchestrator | 2025-09-20 10:00:51.432674 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-20 10:00:51.432687 | orchestrator | Saturday 20 September 2025 09:58:37 +0000 (0:00:00.674) 0:00:04.039 **** 2025-09-20 10:00:51.432721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.432736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.432758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.432770 | orchestrator | 2025-09-20 10:00:51.432783 
| orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-20 10:00:51.432795 | orchestrator | Saturday 20 September 2025 09:58:39 +0000 (0:00:01.401) 0:00:05.441 **** 2025-09-20 10:00:51.432808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 10:00:51.432821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 10:00:51.432834 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:00:51.432847 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:00:51.432945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 10:00:51.432960 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:00:51.432973 | orchestrator | 2025-09-20 10:00:51.432986 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-20 10:00:51.432997 | orchestrator | Saturday 20 September 2025 09:58:39 +0000 (0:00:00.360) 0:00:05.801 **** 2025-09-20 10:00:51.433014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 10:00:51.433032 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:00:51.433043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 10:00:51.433054 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:00:51.433066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-20 10:00:51.433077 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:00:51.433088 | orchestrator | 2025-09-20 10:00:51.433099 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-20 10:00:51.433110 | orchestrator | Saturday 20 September 2025 09:58:40 +0000 (0:00:00.910) 0:00:06.712 **** 2025-09-20 10:00:51.433121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.433133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.433156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.433174 | orchestrator | 2025-09-20 10:00:51.433185 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-20 10:00:51.433196 | orchestrator | Saturday 20 September 2025 09:58:41 +0000 (0:00:01.382) 0:00:08.094 **** 2025-09-20 10:00:51.433231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.433244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.433255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-20 10:00:51.433266 | 
orchestrator |
2025-09-20 10:00:51.433277 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-09-20 10:00:51.433288 | orchestrator | Saturday 20 September 2025 09:58:43 +0000 (0:00:01.453) 0:00:09.548 ****
2025-09-20 10:00:51.433299 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:00:51.433309 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:00:51.433320 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:00:51.433331 | orchestrator |
2025-09-20 10:00:51.433342 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-09-20 10:00:51.433352 | orchestrator | Saturday 20 September 2025 09:58:43 +0000 (0:00:00.504) 0:00:10.052 ****
2025-09-20 10:00:51.433363 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-20 10:00:51.433374 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-20 10:00:51.433385 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-20 10:00:51.433395 | orchestrator |
2025-09-20 10:00:51.433406 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-09-20 10:00:51.433416 | orchestrator | Saturday 20 September 2025 09:58:44 +0000 (0:00:01.240) 0:00:11.293 ****
2025-09-20 10:00:51.433427 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-20 10:00:51.433445 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-20 10:00:51.433456 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-20 10:00:51.433466 | orchestrator |
2025-09-20 10:00:51.433477 | 
orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-09-20 10:00:51.433488 | orchestrator | Saturday 20 September 2025 09:58:45 +0000 (0:00:01.111) 0:00:12.404 ****
2025-09-20 10:00:51.433504 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-20 10:00:51.433516 | orchestrator |
2025-09-20 10:00:51.433527 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-09-20 10:00:51.433538 | orchestrator | Saturday 20 September 2025 09:58:46 +0000 (0:00:00.769) 0:00:13.174 ****
2025-09-20 10:00:51.433549 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-09-20 10:00:51.433565 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-09-20 10:00:51.433575 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:00:51.433586 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:00:51.433597 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:00:51.433608 | orchestrator |
2025-09-20 10:00:51.433619 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-09-20 10:00:51.433630 | orchestrator | Saturday 20 September 2025 09:58:47 +0000 (0:00:00.658) 0:00:13.833 ****
2025-09-20 10:00:51.433641 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:00:51.433651 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:00:51.433662 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:00:51.433673 | orchestrator |
2025-09-20 10:00:51.433684 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-09-20 10:00:51.433694 | orchestrator | Saturday 20 September 2025 09:58:47 +0000 (0:00:00.511) 0:00:14.344 ****
2025-09-20 10:00:51.433706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1856659, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1074846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.433718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1856659, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1074846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.433730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1856659, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1074846, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.433748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1856709, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1221995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => each of the following dashboard files, with identical stats on all three nodes (regular file, mode 0644, owner root:root, uid/gid 0, dev 107, nlink 1, atime/mtime 1758359983.0):
2025-09-20 10:00:51 | orchestrator |   ceph/rbd-overview.json            /operations/grafana/dashboards/ceph/rbd-overview.json (size 25686, inode 1856709)
2025-09-20 10:00:51 | orchestrator |   ceph/ceph_pools.json              /operations/grafana/dashboards/ceph/ceph_pools.json (size 25279, inode 1856669)
2025-09-20 10:00:51 | orchestrator |   ceph/rgw-s3-analytics.json        /operations/grafana/dashboards/ceph/rgw-s3-analytics.json (size 167897, inode 1856712)
2025-09-20 10:00:51 | orchestrator |   ceph/osd-device-details.json      /operations/grafana/dashboards/ceph/osd-device-details.json (size 26655, inode 1856681)
2025-09-20 10:00:51 | orchestrator |   ceph/radosgw-overview.json        /operations/grafana/dashboards/ceph/radosgw-overview.json (size 39556, inode 1856694)
2025-09-20 10:00:51 | orchestrator |   ceph/README.md                    /operations/grafana/dashboards/ceph/README.md (size 84, inode 1856658)
2025-09-20 10:00:51 | orchestrator |   ceph/ceph-cluster.json            /operations/grafana/dashboards/ceph/ceph-cluster.json (size 34113, inode 1856662)
2025-09-20 10:00:51 | orchestrator |   ceph/cephfs-overview.json         /operations/grafana/dashboards/ceph/cephfs-overview.json (size 9025, inode 1856670)
2025-09-20 10:00:51 | orchestrator |   ceph/pool-detail.json             /operations/grafana/dashboards/ceph/pool-detail.json (size 19609, inode 1856686)
2025-09-20 10:00:51 | orchestrator |   ceph/rbd-details.json             /operations/grafana/dashboards/ceph/rbd-details.json (size 12997, inode 1856706)
2025-09-20 10:00:51 | orchestrator |   ceph/ceph_overview.json           /operations/grafana/dashboards/ceph/ceph_overview.json (size 80386, inode 1856666)
2025-09-20 10:00:51 | orchestrator |   ceph/radosgw-detail.json          /operations/grafana/dashboards/ceph/radosgw-detail.json (size 19695, inode 1856692)
2025-09-20 10:00:51 | orchestrator |   ceph/osds-overview.json           /operations/grafana/dashboards/ceph/osds-overview.json (size 38432, inode 1856682)
2025-09-20 10:00:51 | orchestrator |   ceph/multi-cluster-overview.json  /operations/grafana/dashboards/ceph/multi-cluster-overview.json (size 62676, inode 1856680)
2025-09-20 10:00:51 | orchestrator |   ceph/hosts-overview.json          /operations/grafana/dashboards/ceph/hosts-overview.json (size 27218, inode 1856678)
2025-09-20 10:00:51 | orchestrator |   ceph/pool-overview.json           /operations/grafana/dashboards/ceph/pool-overview.json (size 49139, inode 1856688)
2025-09-20 10:00:51 | orchestrator |   ceph/host-details.json            /operations/grafana/dashboards/ceph/host-details.json (size 44791, inode 1856674)
2025-09-20 10:00:51 | orchestrator |   ceph/radosgw-sync-overview.json   /operations/grafana/dashboards/ceph/radosgw-sync-overview.json (size 16156, inode 1856704)
2025-09-20 10:00:51 | orchestrator |   openstack/openstack.json          /operations/grafana/dashboards/openstack/openstack.json (size 57270, inode 1856828)
2025-09-20 10:00:51 | orchestrator |   infrastructure/haproxy.json       /operations/grafana/dashboards/infrastructure/haproxy.json (size 410814, inode 1856746)
2025-09-20 10:00:51 | orchestrator |   infrastructure/database.json      /operations/grafana/dashboards/infrastructure/database.json (size 30898, inode 1856727)
2025-09-20 10:00:51.435119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path':
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1856727, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1274848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1856763, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1474848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1856763, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1474848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435156 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1856763, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1474848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1856721, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1240442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1856721, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1240442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1856721, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1240442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1856808, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1624851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1856808, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1624851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1856808, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1624851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1856766, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.155485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1856766, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 
'mtime': 1758359983.0, 'ctime': 1758360536.155485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1856766, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.155485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1856810, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1624851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1856810, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1624851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1856810, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1624851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1856824, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1692562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1856824, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1692562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1856824, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1692562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1856802, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.161349, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1856802, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.161349, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1856802, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.161349, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1856758, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.145485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435580 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1856758, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.145485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1856758, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.145485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1856740, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1364849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435614 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1856740, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1364849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1856740, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1364849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1856754, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1444848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-20 10:00:51.435669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1856754, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1444848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1856754, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1444848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1856730, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1331475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1856730, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1331475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1856730, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1331475, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1856760, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1470149, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1856760, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1470149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1856760, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1470149, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-20 10:00:51.435779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1856819, 'dev': 107, 
'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.167485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1856819, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.167485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1856819, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.167485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1856815, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1654851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1856815, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1654851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1856815, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1654851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1856722, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1244848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1856722, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1244848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1856722, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1244848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1856725, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1254847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1856725, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1254847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1856725, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.1254847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1856789, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.156485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1856789, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.156485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.435988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1856789, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.156485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.436001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1856811, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.163485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.436032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1856811, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.163485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.436046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1856811, 'dev': 107, 'nlink': 1, 'atime': 1758359983.0, 'mtime': 1758359983.0, 'ctime': 1758360536.163485, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-20 10:00:51.436059 | orchestrator |
2025-09-20 10:00:51.436071 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-09-20 10:00:51.436084 | orchestrator | Saturday 20 September 2025 09:59:27 +0000 (0:00:39.331) 0:00:53.675 ****
2025-09-20 10:00:51.436097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-20 10:00:51.436109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-20 10:00:51.436123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-20 10:00:51.436141 | orchestrator |
2025-09-20 10:00:51.436154 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-09-20 10:00:51.436166 | orchestrator | Saturday 20 September 2025 09:59:28 +0000 (0:00:01.087) 0:00:54.763 ****
2025-09-20 10:00:51.436179 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:00:51.436191 | orchestrator |
2025-09-20 10:00:51.436203 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-09-20 10:00:51.436238 | orchestrator | Saturday 20 September 2025 09:59:30 +0000 (0:00:02.610) 0:00:57.374 ****
2025-09-20 10:00:51.436250 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:00:51.436262 | orchestrator |
2025-09-20 10:00:51.436274 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-20 10:00:51.436287 | orchestrator | Saturday 20 September 2025 09:59:33 +0000 (0:00:02.729) 0:01:00.103 ****
2025-09-20 10:00:51.436299 | orchestrator |
2025-09-20 10:00:51.436309 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-20 10:00:51.436326 | orchestrator | Saturday 20 September 2025 09:59:33 +0000 (0:00:00.102) 0:01:00.206 ****
2025-09-20 10:00:51.436337 | orchestrator |
2025-09-20 10:00:51.436348 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-20 10:00:51.436358 | orchestrator | Saturday 20 September 2025 09:59:33 +0000 (0:00:00.116) 0:01:00.322 ****
2025-09-20 10:00:51.436369 | orchestrator |
2025-09-20 10:00:51.436380 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-09-20 10:00:51.436395 | orchestrator | Saturday 20 September 2025 09:59:34 +0000 (0:00:00.517) 0:01:00.840 ****
2025-09-20 10:00:51.436406 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:00:51.436417 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:00:51.436427 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:00:51.436438 | orchestrator |
2025-09-20 10:00:51.436449 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-09-20 10:00:51.436460 | orchestrator | Saturday 20 September 2025 09:59:36 +0000 (0:00:01.816) 0:01:02.657 ****
2025-09-20 10:00:51.436470 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:00:51.436481 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:00:51.436492 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-09-20 10:00:51.436504 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-09-20 10:00:51.436515 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
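The handler above polls the new Grafana container until it answers, with a bounded retry budget (12 attempts here, three of which fail before it reports ok). A minimal standalone sketch of that retry pattern, assuming a hypothetical `check` callable in place of the real HTTP probe (this is not the kolla-ansible task itself):

```python
import time

def wait_until_ready(check, retries=12, delay=5.0, sleep=time.sleep):
    """Poll `check` until it returns True, mirroring Ansible's
    retries/delay/until loop. Returns the attempt number that
    succeeded, or raises TimeoutError when the budget is exhausted."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            sleep(delay)  # back off between probes
    raise TimeoutError(f"service not ready after {retries} attempts")

# A probe that succeeds on the 4th call, like the three
# "FAILED - RETRYING" lines above followed by "ok".
calls = iter([False, False, False, True])
assert wait_until_ready(lambda: next(calls),
                        retries=12, delay=0, sleep=lambda _: None) == 4
```

Injecting `sleep` keeps the sketch testable without real delays; the production task would probe the Grafana login URL instead.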
2025-09-20 10:00:51.436526 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:00:51.436537 | orchestrator |
2025-09-20 10:00:51.436547 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-09-20 10:00:51.436558 | orchestrator | Saturday 20 September 2025 10:00:14 +0000 (0:00:38.577) 0:01:41.235 ****
2025-09-20 10:00:51.436568 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:00:51.436579 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:00:51.436590 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:00:51.436601 | orchestrator |
2025-09-20 10:00:51.436612 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-09-20 10:00:51.436622 | orchestrator | Saturday 20 September 2025 10:00:44 +0000 (0:00:30.089) 0:02:11.324 ****
2025-09-20 10:00:51.436633 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:00:51.436644 | orchestrator |
2025-09-20 10:00:51.436654 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-09-20 10:00:51.436665 | orchestrator | Saturday 20 September 2025 10:00:47 +0000 (0:00:02.172) 0:02:13.496 ****
2025-09-20 10:00:51.436676 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:00:51.436686 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:00:51.436697 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:00:51.436708 | orchestrator |
2025-09-20 10:00:51.436718 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-20 10:00:51.436737 | orchestrator | Saturday 20 September 2025 10:00:47 +0000 (0:00:00.411) 0:02:13.908 ****
2025-09-20 10:00:51.436750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-09-20 10:00:51.436764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-20 10:00:51.436776 | orchestrator |
2025-09-20 10:00:51.436787 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-20 10:00:51.436797 | orchestrator | Saturday 20 September 2025 10:00:49 +0000 (0:00:02.496) 0:02:16.405 ****
2025-09-20 10:00:51.436808 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:00:51.436819 | orchestrator |
2025-09-20 10:00:51.436830 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 10:00:51.436841 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-20 10:00:51.436853 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-20 10:00:51.436864 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-20 10:00:51.436875 | orchestrator |
2025-09-20 10:00:51.436885 | orchestrator |
2025-09-20 10:00:51.436896 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 10:00:51.436907 | orchestrator | Saturday 20 September 2025 10:00:50 +0000 (0:00:00.246) 0:02:16.651 ****
2025-09-20 10:00:51.436918 | orchestrator | ===============================================================================
2025-09-20 10:00:51.436929 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 39.33s
2025-09-20 10:00:51.436939 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.58s
2025-09-20 10:00:51.436950 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.09s
2025-09-20 10:00:51.436961 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.73s
2025-09-20 10:00:51.436972 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.61s
2025-09-20 10:00:51.436987 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.50s
2025-09-20 10:00:51.436998 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.17s
2025-09-20 10:00:51.437009 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.82s
2025-09-20 10:00:51.437020 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s
2025-09-20 10:00:51.437035 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.40s
2025-09-20 10:00:51.437046 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.38s
2025-09-20 10:00:51.437057 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s
2025-09-20 10:00:51.437068 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.11s
2025-09-20 10:00:51.437079 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.09s
2025-09-20 10:00:51.437089 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.91s
2025-09-20 10:00:51.437100 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.89s
2025-09-20 10:00:51.437111 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.84s
2025-09-20 10:00:51.437127 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.77s
2025-09-20 10:00:51.437138 | orchestrator | grafana : Flush handlers ------------------------------------------------ 0.74s
2025-09-20 10:00:51.437149 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.67s
2025-09-20 10:00:51.437159 | orchestrator | 2025-09-20 10:00:51 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:00:54.479879 | orchestrator | 2025-09-20 10:00:54 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 10:00:54.481617 | orchestrator | 2025-09-20 10:00:54 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 10:00:54.481664 | orchestrator | 2025-09-20 10:00:54 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:00:57.530342 | orchestrator | 2025-09-20 10:00:57 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 10:00:57.531755 | orchestrator | 2025-09-20 10:00:57 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 10:00:57.531794 | orchestrator | 2025-09-20 10:00:57 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:01:00.597116 | orchestrator | 2025-09-20 10:01:00 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state STARTED
2025-09-20 10:01:00.598185 | orchestrator | 2025-09-20 10:01:00 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED
2025-09-20 10:01:00.598331 | orchestrator | 2025-09-20 10:01:00 | INFO  | Wait 1 second(s) until the next check
2025-09-20 10:01:03.655092 | orchestrator | 2025-09-20 10:01:03 | INFO  | Task 7f78cd06-e06a-4e62-8f7e-93379903a89b is in state SUCCESS
2025-09-20 10:01:03.656489 | orchestrator |
2025-09-20 10:01:03.656526 | orchestrator |
2025-09-20 10:01:03.656536 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-20 10:01:03.656546 | orchestrator |
2025-09-20 10:01:03.656555 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-20 10:01:03.656564 | orchestrator | Saturday 20 September 2025 09:51:58 +0000 (0:00:00.295) 0:00:00.295 ****
2025-09-20 10:01:03.656573 | orchestrator | changed: [testbed-manager]
2025-09-20 10:01:03.656583 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.656592 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:01:03.656657 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:01:03.656667 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:01:03.656693 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:01:03.656703 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:01:03.656713 | orchestrator |
2025-09-20 10:01:03.656722 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-20 10:01:03.656731 | orchestrator | Saturday 20 September 2025 09:51:58 +0000 (0:00:00.716) 0:00:01.011 ****
2025-09-20 10:01:03.656767 | orchestrator | changed: [testbed-manager]
2025-09-20 10:01:03.656778 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.656787 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:01:03.656795 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:01:03.656804 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:01:03.656813 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:01:03.656821 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:01:03.656830 | orchestrator |
2025-09-20 10:01:03.656839 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-20 10:01:03.656848 | orchestrator | Saturday 20 September 2025 09:51:59 +0000 (0:00:00.663) 0:00:01.675 ****
2025-09-20 10:01:03.656943 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-09-20 10:01:03.656953 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-09-20 10:01:03.656962 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-09-20 10:01:03.656970 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-09-20 10:01:03.657002 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-09-20 10:01:03.657011 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-09-20 10:01:03.657020 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-09-20 10:01:03.657028 | orchestrator |
2025-09-20 10:01:03.657037 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-09-20 10:01:03.657048 | orchestrator |
2025-09-20 10:01:03.657058 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-20 10:01:03.657069 | orchestrator | Saturday 20 September 2025 09:52:00 +0000 (0:00:01.054) 0:00:02.729 ****
2025-09-20 10:01:03.657079 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:01:03.657089 | orchestrator |
2025-09-20 10:01:03.657099 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-09-20 10:01:03.657110 | orchestrator | Saturday 20 September 2025 09:52:01 +0000 (0:00:00.717) 0:00:03.447 ****
2025-09-20 10:01:03.657134 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-09-20 10:01:03.657145 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-09-20 10:01:03.657155 | orchestrator |
2025-09-20 10:01:03.657165 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-09-20 10:01:03.657175 | orchestrator | Saturday 20 September 2025 09:52:05 +0000 (0:00:04.027) 0:00:07.474 ****
2025-09-20 10:01:03.657186 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-20 10:01:03.657196 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-20 10:01:03.657225 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.657235 | orchestrator |
2025-09-20 10:01:03.657246 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-20 10:01:03.657256 | orchestrator | Saturday 20 September 2025 09:52:08 +0000 (0:00:03.593) 0:00:11.068 ****
2025-09-20 10:01:03.657266 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.657276 | orchestrator |
2025-09-20 10:01:03.657285 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-09-20 10:01:03.657295 | orchestrator | Saturday 20 September 2025 09:52:09 +0000 (0:00:00.697) 0:00:11.765 ****
2025-09-20 10:01:03.657305 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.657315 | orchestrator |
2025-09-20 10:01:03.657326 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-20 10:01:03.657336 | orchestrator | Saturday 20 September 2025 09:52:11 +0000 (0:00:02.213) 0:00:13.979 ****
2025-09-20 10:01:03.657358 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.657368 | orchestrator |
2025-09-20 10:01:03.657379 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-20 10:01:03.657389 | orchestrator | Saturday 20 September 2025 09:52:15 +0000 (0:00:03.446) 0:00:17.425 ****
2025-09-20 10:01:03.657399 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.657408 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.657416 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.657425 | orchestrator |
2025-09-20 10:01:03.657434 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-20 10:01:03.657442 | orchestrator | Saturday 20 September 2025 09:52:15 +0000 (0:00:00.498) 0:00:17.923 ****
2025-09-20 10:01:03.657451 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:01:03.657460 | orchestrator |
2025-09-20 10:01:03.657468 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-20 10:01:03.657515 | orchestrator | Saturday 20 September 2025 09:52:45 +0000 (0:00:29.843) 0:00:47.767 ****
2025-09-20 10:01:03.657524 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.657551 | orchestrator |
2025-09-20 10:01:03.657561 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-20 10:01:03.657569 | orchestrator | Saturday 20 September 2025 09:53:00 +0000 (0:00:14.543) 0:01:02.310 ****
2025-09-20 10:01:03.657578 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:01:03.657586 | orchestrator |
2025-09-20 10:01:03.657595 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-20 10:01:03.657610 | orchestrator | Saturday 20 September 2025 09:53:10 +0000 (0:00:09.840) 0:01:12.151 ****
2025-09-20 10:01:03.657631 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:01:03.657641 | orchestrator |
2025-09-20 10:01:03.657649 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-20 10:01:03.657682 | orchestrator | Saturday 20 September 2025 09:53:11 +0000 (0:00:01.795) 0:01:13.947 ****
2025-09-20 10:01:03.657691 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.657700 | orchestrator |
2025-09-20 10:01:03.657708 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-20 10:01:03.657717 | orchestrator | Saturday 20 September 2025 09:53:12 +0000 (0:00:00.500) 0:01:14.448 ****
2025-09-20 10:01:03.657726 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:01:03.657735 | orchestrator |
2025-09-20 10:01:03.657744 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-20 10:01:03.657752 | orchestrator | Saturday 20 September 2025 09:53:12 +0000 (0:00:00.513) 0:01:14.961 ****
2025-09-20 10:01:03.657761 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:01:03.657769 | orchestrator |
2025-09-20 10:01:03.657778 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-20 10:01:03.657787 | orchestrator | Saturday 20 September 2025 09:53:30 +0000 (0:00:18.132) 0:01:33.093 ****
2025-09-20 10:01:03.657795 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.657804 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.657812 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.657821 | orchestrator |
2025-09-20 10:01:03.657829 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-20 10:01:03.657838 | orchestrator |
2025-09-20 10:01:03.657847 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-20 10:01:03.657855 | orchestrator | Saturday 20 September 2025 09:53:31 +0000 (0:00:00.331) 0:01:33.425 ****
2025-09-20 10:01:03.657864 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:01:03.657872 | orchestrator |
2025-09-20 10:01:03.657881 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-20 10:01:03.657890 | orchestrator | Saturday 20 September 2025 09:53:31 +0000 (0:00:00.554) 0:01:33.980 ****
2025-09-20 10:01:03.657898 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.657907 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.657915 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.657924 | orchestrator |
2025-09-20 10:01:03.657932 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-20 10:01:03.657941 | orchestrator | Saturday 20 September 2025 09:53:33 +0000 (0:00:02.082) 0:01:36.063 ****
2025-09-20 10:01:03.657949 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.657958 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.657967 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.657975 | orchestrator |
2025-09-20 10:01:03.657984 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-20 10:01:03.657992 | orchestrator | Saturday 20 September 2025 09:53:35 +0000 (0:00:02.037) 0:01:38.101 ****
2025-09-20 10:01:03.658006 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.658539 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.658557 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.658566 | orchestrator |
2025-09-20 10:01:03.658575 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-20 10:01:03.658584 | orchestrator | Saturday 20 September 2025 09:53:36 +0000 (0:00:00.850) 0:01:38.951 ****
2025-09-20 10:01:03.658593 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-20 10:01:03.658602 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.658610 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-20 10:01:03.658619 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.658637 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-20 10:01:03.658646 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-20 10:01:03.658655 | orchestrator |
2025-09-20 10:01:03.658663 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-20 10:01:03.658672 | orchestrator | Saturday 20 September 2025 09:53:46 +0000 (0:00:09.632) 0:01:48.584 ****
2025-09-20 10:01:03.658681 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.658690 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.658698 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.658707 | orchestrator |
2025-09-20 10:01:03.658716 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-20 10:01:03.658725 | orchestrator | Saturday 20 September 2025 09:53:47 +0000 (0:00:00.672) 0:01:49.257 ****
2025-09-20 10:01:03.658734 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-20 10:01:03.658742 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.658751 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-20 10:01:03.658760 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.658769 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-20 10:01:03.658778 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.658787 | orchestrator |
2025-09-20 10:01:03.658795 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-20 10:01:03.658804 | orchestrator | Saturday 20 September 2025 09:53:49 +0000 (0:00:01.906) 0:01:51.164 ****
2025-09-20 10:01:03.658813 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.658822 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.658830 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.658839 | orchestrator |
2025-09-20 10:01:03.658848 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-20 10:01:03.658857 | orchestrator | Saturday 20 September 2025 09:53:49 +0000 (0:00:00.726) 0:01:51.891 ****
2025-09-20 10:01:03.658865 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.658874 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.658883 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.658891 | orchestrator |
2025-09-20 10:01:03.658900 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-20 10:01:03.658909 | orchestrator | Saturday 20 September 2025 09:53:50 +0000 (0:00:01.191) 0:01:53.083 ****
2025-09-20 10:01:03.658918 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.658927 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.658948 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.658957 | orchestrator |
2025-09-20 10:01:03.658966 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-20 10:01:03.658975 | orchestrator | Saturday 20 September 2025 09:53:55 +0000 (0:00:04.072) 0:01:57.156 ****
2025-09-20 10:01:03.658983 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.658992 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.659001 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:01:03.659010 | orchestrator |
2025-09-20 10:01:03.659018 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-20 10:01:03.659027 | orchestrator | Saturday 20 September 2025 09:54:18 +0000 (0:00:23.130) 0:02:20.286 ****
2025-09-20 10:01:03.659036 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.659045 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.659054 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:01:03.659062 | orchestrator |
2025-09-20 10:01:03.659071 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-20 10:01:03.659080 | orchestrator | Saturday 20 September 2025 09:54:31 +0000 (0:00:12.933) 0:02:33.219 ****
2025-09-20 10:01:03.659088 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:01:03.659097 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.659106 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.659114 | orchestrator |
2025-09-20 10:01:03.659123 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-09-20 10:01:03.659138 | orchestrator | Saturday 20 September 2025 09:54:32 +0000 (0:00:01.066) 0:02:34.286 ****
2025-09-20 10:01:03.659146 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.659155 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.659166 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.659176 | orchestrator |
2025-09-20 10:01:03.659186 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-09-20 10:01:03.659197 | orchestrator | Saturday 20 September 2025 09:54:43 +0000 (0:00:11.722) 0:02:46.008 ****
2025-09-20 10:01:03.659233 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.659244 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.659254 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.659264 | orchestrator |
2025-09-20 10:01:03.659274 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-20 10:01:03.659284 | orchestrator | Saturday 20 September 2025 09:54:45 +0000 (0:00:01.152) 0:02:47.160 ****
2025-09-20 10:01:03.659294 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.659304 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.659313 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.659323 | orchestrator |
2025-09-20 10:01:03.659333 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-09-20 10:01:03.659344 | orchestrator |
2025-09-20 10:01:03.659355 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-20 10:01:03.659365 | orchestrator | Saturday 20 September 2025 09:54:45 +0000 (0:00:00.514) 0:02:47.675 ****
2025-09-20 10:01:03.659375 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:01:03.659386 | orchestrator |
2025-09-20 10:01:03.659402 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-09-20 10:01:03.659413 | orchestrator | Saturday 20 September 2025 09:54:46 +0000 (0:00:00.619) 0:02:48.294 ****
2025-09-20 10:01:03.659423 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-09-20 10:01:03.659433 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-09-20 10:01:03.659443 | orchestrator |
2025-09-20 10:01:03.659453 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-09-20 10:01:03.659463 | orchestrator | Saturday 20 September 2025 09:54:49 +0000 (0:00:03.217) 0:02:51.512 ****
2025-09-20 10:01:03.659473 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-09-20 10:01:03.659527 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-09-20 10:01:03.659539 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-09-20 10:01:03.659549 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-09-20 10:01:03.659558 | orchestrator |
2025-09-20 10:01:03.659567 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-09-20 10:01:03.659575 | orchestrator | Saturday 20 September 2025 09:54:56 +0000 (0:00:06.796) 0:02:58.309 ****
2025-09-20 10:01:03.659584 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-20 10:01:03.659593 | orchestrator |
2025-09-20 10:01:03.659601 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-09-20 10:01:03.659631 | orchestrator | Saturday 20 September 2025 09:54:59 +0000 (0:00:03.654) 0:03:01.964 ****
2025-09-20 10:01:03.659640 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-20 10:01:03.659649 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-09-20 10:01:03.659657 | orchestrator |
2025-09-20 10:01:03.659666 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-09-20 10:01:03.659675 | orchestrator | Saturday 20 September 2025 09:55:04 +0000 (0:00:04.360) 0:03:06.324 ****
2025-09-20 10:01:03.659691 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-20 10:01:03.659700 | orchestrator |
2025-09-20 10:01:03.659708 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-09-20 10:01:03.659717 | orchestrator | Saturday 20 September 2025 09:55:07 +0000 (0:00:03.732) 0:03:10.056 ****
2025-09-20 10:01:03.659805 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-09-20 10:01:03.659836 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-09-20 10:01:03.659846 | orchestrator |
2025-09-20 10:01:03.659855 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-20 10:01:03.659871 | orchestrator | Saturday 20 September 2025 09:55:15 +0000 (0:00:07.728) 0:03:17.785 ****
2025-09-20 10:01:03.659886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.659905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.659917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.659941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.659953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.659963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.659972 | orchestrator |
2025-09-20 10:01:03.659981 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-09-20 10:01:03.659990 | orchestrator | Saturday 20 September 2025 09:55:17 +0000 (0:00:01.856) 0:03:19.641 ****
2025-09-20 10:01:03.659999 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.660007 | orchestrator |
2025-09-20 10:01:03.660016 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-20 10:01:03.660025 | orchestrator | Saturday 20 September 2025 09:55:17 +0000 (0:00:00.269) 0:03:19.911 ****
2025-09-20 10:01:03.660033 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.660042 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.660051 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.660059 | orchestrator |
2025-09-20 10:01:03.660068 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-20 10:01:03.660077 | orchestrator | Saturday 20 September 2025 09:55:18 +0000 (0:00:00.753) 0:03:20.664 ****
2025-09-20 10:01:03.660090 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-20 10:01:03.660099 | orchestrator |
2025-09-20 10:01:03.660107 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-20 10:01:03.660116 | orchestrator | Saturday 20 September 2025 09:55:19 +0000 (0:00:00.993) 0:03:21.657 ****
2025-09-20 10:01:03.660125 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.660133 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.660142 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.660151 | orchestrator |
2025-09-20 10:01:03.660160 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-20 10:01:03.660168 | orchestrator | Saturday 20 September 2025 09:55:19 +0000 (0:00:00.414) 0:03:22.072 ****
2025-09-20 10:01:03.660184 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:01:03.660193 | orchestrator |
2025-09-20 10:01:03.660202 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-20 10:01:03.660226 | orchestrator | Saturday 20 September 2025 09:55:20 +0000 (0:00:00.580) 0:03:22.653 ****
2025-09-20 10:01:03.660236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.660298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.660313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.660323 | orchestrator |
2025-09-20 10:01:03.660332 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-20 10:01:03.660341 | orchestrator | Saturday 20 September 2025 09:55:23 +0000 (0:00:02.781) 0:03:25.435 ****
2025-09-20 10:01:03.660350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.660378 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.660394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.660413 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.660429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.660449 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.660458 | orchestrator |
2025-09-20 10:01:03.660467 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-09-20 10:01:03.660475 | orchestrator | Saturday 20 September 2025 09:55:24 +0000 (0:00:01.197) 0:03:26.632 ****
2025-09-20 10:01:03.660494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.660514 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.660530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.660550 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.660649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.660685 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.660694 | orchestrator |
2025-09-20 10:01:03.660703 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-09-20 10:01:03.660711 | orchestrator | Saturday 20 September 2025 09:55:25 +0000 (0:00:01.237) 0:03:27.869 ****
2025-09-20 10:01:03.660728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.660739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external':
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:01:03.660760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:01:03.660769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 
10:01:03.660785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.660795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.660804 | orchestrator | 2025-09-20 10:01:03.660813 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-20 10:01:03.660822 | orchestrator | Saturday 20 September 2025 09:55:28 +0000 (0:00:03.116) 0:03:30.986 **** 2025-09-20 10:01:03.660835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:01:03.660851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:01:03.660866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:01:03.660877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.660886 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.660905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.660914 | orchestrator | 2025-09-20 10:01:03.660923 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-20 10:01:03.660931 | orchestrator | Saturday 20 September 2025 09:55:38 +0000 (0:00:09.161) 0:03:40.148 **** 2025-09-20 10:01:03.660940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 10:01:03.660954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.660964 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.660973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-20 10:01:03.660988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.660997 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.661010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.661020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.661029 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.661038 | orchestrator |
2025-09-20 10:01:03.661046 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-09-20 10:01:03.661055 | orchestrator | Saturday 20 September 2025 09:55:39 +0000 (0:00:01.422) 0:03:41.570 ****
2025-09-20 10:01:03.661063 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.661072 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:01:03.661080 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:01:03.661089 | orchestrator |
2025-09-20 10:01:03.661102 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-09-20 10:01:03.661111 | orchestrator | Saturday 20 September 2025 09:55:42 +0000 (0:00:02.991) 0:03:44.562 ****
2025-09-20 10:01:03.661120 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.661128 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.661137 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.661146 | orchestrator |
2025-09-20 10:01:03.661160 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-09-20 10:01:03.661169 | orchestrator | Saturday 20 September 2025 09:55:43 +0000 (0:00:00.620) 0:03:45.183 ****
2025-09-20 10:01:03.661178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-20 10:01:03.661191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value':
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.661201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:01:03.661235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-20 10:01:03.661253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.661262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-20 10:01:03.661271 | orchestrator |
2025-09-20 10:01:03.661280 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-20 10:01:03.661289 | orchestrator | Saturday 20 September 2025 09:55:46 +0000 (0:00:03.379) 0:03:48.562 ****
2025-09-20 10:01:03.661298 | orchestrator |
2025-09-20 10:01:03.661307 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-20 10:01:03.661319 | orchestrator | Saturday 20 September 2025 09:55:46 +0000 (0:00:00.136) 0:03:48.699 ****
2025-09-20 10:01:03.661328 | orchestrator |
2025-09-20 10:01:03.661337 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-20 10:01:03.661346 | orchestrator | Saturday 20 September 2025 09:55:46 +0000 (0:00:00.207) 0:03:48.906 ****
2025-09-20 10:01:03.661354 | orchestrator |
2025-09-20 10:01:03.661363 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-09-20 10:01:03.661372 | orchestrator | Saturday 20 September 2025 09:55:47 +0000 (0:00:00.242) 0:03:49.149 ****
2025-09-20 10:01:03.661380 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.661389 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:01:03.661398 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:01:03.661406 | orchestrator |
2025-09-20 10:01:03.661415 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-09-20 10:01:03.661424 | orchestrator | Saturday 20 September 2025 09:56:14 +0000 (0:00:27.464) 0:04:16.614 ****
2025-09-20 10:01:03.661432 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.661441 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:01:03.661449 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:01:03.661458 | orchestrator |
2025-09-20 10:01:03.661467 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-09-20 10:01:03.661475 | orchestrator |
2025-09-20 10:01:03.661484 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-20 10:01:03.661493 | orchestrator | Saturday 20 September 2025 09:56:22 +0000 (0:00:07.634) 0:04:24.248 ****
2025-09-20 10:01:03.661502 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-20 10:01:03.661510 | orchestrator |
2025-09-20 10:01:03.661519 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-20 10:01:03.661536 | orchestrator | Saturday 20 September 2025 09:56:24 +0000 (0:00:02.199) 0:04:26.448 ****
2025-09-20 10:01:03.661545 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:01:03.661554 | orchestrator | skipping: [testbed-node-4]
2025-09-20 10:01:03.661562 | orchestrator | skipping: [testbed-node-5]
2025-09-20 10:01:03.661571 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.661580 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.661588 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.661597 | orchestrator |
2025-09-20 10:01:03.661606 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-09-20 10:01:03.661614 | orchestrator | Saturday 20 September 2025 09:56:24 +0000 (0:00:00.654) 0:04:27.102 ****
2025-09-20 10:01:03.661623 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.661632 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.661640 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.661649 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-20 10:01:03.661657 | orchestrator |
2025-09-20 10:01:03.661666 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-20 10:01:03.661679 | orchestrator | Saturday 20 September 2025 09:56:26 +0000 (0:00:01.919) 0:04:29.022 ****
2025-09-20 10:01:03.661688 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-09-20 10:01:03.661697 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-09-20 10:01:03.661706 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-09-20 10:01:03.661715 | orchestrator |
2025-09-20 10:01:03.661723 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-20 10:01:03.661732 | orchestrator | Saturday 20 September 2025 09:56:27 +0000 (0:00:00.981) 0:04:30.003 ****
2025-09-20 10:01:03.661741 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-09-20 10:01:03.661749 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-09-20 10:01:03.661758 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-09-20 10:01:03.661766 | orchestrator |
2025-09-20 10:01:03.661775 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-20 10:01:03.661784 | orchestrator | Saturday 20 September 2025 09:56:29 +0000 (0:00:01.545) 0:04:31.549 ****
2025-09-20 10:01:03.661792 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-09-20 10:01:03.661801 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:01:03.661809 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-09-20 10:01:03.661818 | orchestrator | skipping: [testbed-node-4]
2025-09-20 10:01:03.661827 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-09-20 10:01:03.661835 | orchestrator | skipping: [testbed-node-5]
2025-09-20 10:01:03.661844 | orchestrator |
2025-09-20 10:01:03.661852 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-09-20 10:01:03.661861 | orchestrator | Saturday 20 September 2025 09:56:30 +0000 (0:00:00.887) 0:04:32.437 ****
2025-09-20 10:01:03.661870 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 10:01:03.661878 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 10:01:03.661887 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 10:01:03.661895 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.661904 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 10:01:03.661913 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 10:01:03.661921 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 10:01:03.661930 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.661939 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 10:01:03.661948 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 10:01:03.661962 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 10:01:03.661971 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.661984 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 10:01:03.661993 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-20 10:01:03.662002 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-20 10:01:03.662010 | orchestrator |
2025-09-20 10:01:03.662393 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-09-20 10:01:03.662404 | orchestrator | Saturday 20 September 2025 09:56:32 +0000 (0:00:02.361) 0:04:34.798 ****
2025-09-20 10:01:03.662413 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.662422 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.662431 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.662439 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:01:03.662448 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:01:03.662457 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:01:03.662465 | orchestrator |
2025-09-20 10:01:03.662474 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-09-20 10:01:03.662483 | orchestrator | Saturday 20 September 2025 09:56:34 +0000 (0:00:01.595) 0:04:36.394 ****
2025-09-20 10:01:03.662492 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.662500 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.662509 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.662518 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:01:03.662526 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:01:03.662535 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:01:03.662543 | orchestrator |
2025-09-20 10:01:03.662552 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-20 10:01:03.662561 | orchestrator | Saturday 20 September 2025 09:56:37 +0000 (0:00:02.909) 0:04:39.303 ****
2025-09-20 10:01:03.662571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes':
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662759 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662827 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662859 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662885 | orchestrator | 2025-09-20 10:01:03.662894 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-20 10:01:03.662903 | orchestrator | Saturday 20 September 2025 09:56:40 +0000 (0:00:03.703) 0:04:43.007 **** 2025-09-20 10:01:03.662912 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:01:03.662920 | orchestrator | 2025-09-20 10:01:03.662929 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-20 10:01:03.662938 | orchestrator | Saturday 20 September 2025 09:56:43 +0000 (0:00:02.197) 0:04:45.204 **** 2025-09-20 10:01:03.662951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.662991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}}) 2025-09-20 10:01:03.663002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663027 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663040 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663139 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.663157 | orchestrator | 2025-09-20 10:01:03.663166 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-20 10:01:03.663175 | orchestrator | Saturday 20 September 2025 09:56:47 +0000 (0:00:04.558) 0:04:49.762 **** 2025-09-20 10:01:03.663223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.663242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.663255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.663274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.663305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663325 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.663334 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.663343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.663352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.663365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663375 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.663384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 10:01:03.663393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663408 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.663440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 10:01:03.663451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663460 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.663469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 10:01:03.663483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663492 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.663501 | orchestrator | 2025-09-20 10:01:03.663509 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-20 10:01:03.663518 | orchestrator | Saturday 20 September 2025 09:56:50 +0000 (0:00:03.073) 0:04:52.836 **** 2025-09-20 10:01:03.663527 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.663537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.663575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663585 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.663595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.663604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.663617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663626 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.663635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.663671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.663682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663691 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.663700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 10:01:03.663713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663722 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.663731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 10:01:03.663740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663755 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.663764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 10:01:03.663794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.663804 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.663813 | orchestrator | 2025-09-20 10:01:03.663822 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-20 10:01:03.663831 | orchestrator | Saturday 20 September 2025 09:56:52 +0000 (0:00:01.970) 0:04:54.806 **** 2025-09-20 10:01:03.663840 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.663849 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.663857 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.663866 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-20 10:01:03.663874 | orchestrator | 2025-09-20 10:01:03.663883 | orchestrator | TASK [nova-cell : Check nova keyring file] 
************************************* 2025-09-20 10:01:03.663892 | orchestrator | Saturday 20 September 2025 09:56:53 +0000 (0:00:00.928) 0:04:55.734 **** 2025-09-20 10:01:03.663900 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-20 10:01:03.663909 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-20 10:01:03.663917 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-20 10:01:03.663926 | orchestrator | 2025-09-20 10:01:03.663935 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-20 10:01:03.663943 | orchestrator | Saturday 20 September 2025 09:56:54 +0000 (0:00:00.914) 0:04:56.648 **** 2025-09-20 10:01:03.663952 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-20 10:01:03.663961 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-20 10:01:03.663969 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-20 10:01:03.663978 | orchestrator | 2025-09-20 10:01:03.663987 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-20 10:01:03.663995 | orchestrator | Saturday 20 September 2025 09:56:55 +0000 (0:00:00.849) 0:04:57.498 **** 2025-09-20 10:01:03.664004 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:01:03.664013 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:01:03.664022 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:01:03.664030 | orchestrator | 2025-09-20 10:01:03.664039 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-20 10:01:03.664047 | orchestrator | Saturday 20 September 2025 09:56:55 +0000 (0:00:00.498) 0:04:57.997 **** 2025-09-20 10:01:03.664056 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:01:03.664070 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:01:03.664078 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:01:03.664087 | orchestrator | 2025-09-20 10:01:03.664100 | orchestrator | TASK [nova-cell : Copy over ceph nova 
keyring file] **************************** 2025-09-20 10:01:03.664108 | orchestrator | Saturday 20 September 2025 09:56:56 +0000 (0:00:00.670) 0:04:58.667 **** 2025-09-20 10:01:03.664117 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-20 10:01:03.664126 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-20 10:01:03.664135 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-20 10:01:03.664143 | orchestrator | 2025-09-20 10:01:03.664152 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-20 10:01:03.664160 | orchestrator | Saturday 20 September 2025 09:56:57 +0000 (0:00:01.208) 0:04:59.876 **** 2025-09-20 10:01:03.664169 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-20 10:01:03.664178 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-20 10:01:03.664187 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-20 10:01:03.664195 | orchestrator | 2025-09-20 10:01:03.664217 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-20 10:01:03.664226 | orchestrator | Saturday 20 September 2025 09:56:58 +0000 (0:00:01.205) 0:05:01.081 **** 2025-09-20 10:01:03.664235 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-20 10:01:03.664243 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-20 10:01:03.664252 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-20 10:01:03.664260 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-20 10:01:03.664269 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-20 10:01:03.664277 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-20 10:01:03.664286 | orchestrator | 2025-09-20 10:01:03.664295 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host 
libvirt)] ************ 2025-09-20 10:01:03.664303 | orchestrator | Saturday 20 September 2025 09:57:02 +0000 (0:00:03.695) 0:05:04.777 **** 2025-09-20 10:01:03.664312 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.664321 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.664329 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.664338 | orchestrator | 2025-09-20 10:01:03.664346 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-20 10:01:03.664355 | orchestrator | Saturday 20 September 2025 09:57:03 +0000 (0:00:00.426) 0:05:05.203 **** 2025-09-20 10:01:03.664364 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.664372 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.664381 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.664389 | orchestrator | 2025-09-20 10:01:03.664398 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-20 10:01:03.664407 | orchestrator | Saturday 20 September 2025 09:57:03 +0000 (0:00:00.318) 0:05:05.521 **** 2025-09-20 10:01:03.664415 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:01:03.664424 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:01:03.664433 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:01:03.664441 | orchestrator | 2025-09-20 10:01:03.664474 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-20 10:01:03.664484 | orchestrator | Saturday 20 September 2025 09:57:04 +0000 (0:00:01.188) 0:05:06.709 **** 2025-09-20 10:01:03.664493 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-20 10:01:03.664503 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-20 
10:01:03.664512 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-20 10:01:03.664520 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-20 10:01:03.664533 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-20 10:01:03.664542 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-20 10:01:03.664550 | orchestrator | 2025-09-20 10:01:03.664559 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-20 10:01:03.664568 | orchestrator | Saturday 20 September 2025 09:57:08 +0000 (0:00:03.728) 0:05:10.438 **** 2025-09-20 10:01:03.664576 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 10:01:03.664585 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 10:01:03.664594 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 10:01:03.664602 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-20 10:01:03.664611 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:01:03.664620 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-20 10:01:03.664628 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:01:03.664637 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-20 10:01:03.664645 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:01:03.664654 | orchestrator | 2025-09-20 10:01:03.664662 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-20 10:01:03.664671 | orchestrator | Saturday 20 September 2025 09:57:11 +0000 (0:00:03.444) 0:05:13.882 **** 
2025-09-20 10:01:03.664680 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.664688 | orchestrator | 2025-09-20 10:01:03.664697 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-20 10:01:03.664705 | orchestrator | Saturday 20 September 2025 09:57:11 +0000 (0:00:00.131) 0:05:14.014 **** 2025-09-20 10:01:03.664714 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.664723 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.664735 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.664744 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.664753 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.664761 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.664770 | orchestrator | 2025-09-20 10:01:03.664778 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-20 10:01:03.664787 | orchestrator | Saturday 20 September 2025 09:57:12 +0000 (0:00:00.512) 0:05:14.526 **** 2025-09-20 10:01:03.664796 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-20 10:01:03.664804 | orchestrator | 2025-09-20 10:01:03.664813 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-20 10:01:03.664821 | orchestrator | Saturday 20 September 2025 09:57:13 +0000 (0:00:00.687) 0:05:15.213 **** 2025-09-20 10:01:03.664830 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.664839 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.664847 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.664856 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.664864 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.664873 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.664881 | orchestrator | 2025-09-20 10:01:03.664890 | orchestrator | TASK [nova-cell : Copying over config.json files for 
services] ***************** 2025-09-20 10:01:03.664898 | orchestrator | Saturday 20 September 2025 09:57:13 +0000 (0:00:00.661) 0:05:15.874 **** 2025-09-20 10:01:03.664908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.664930 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.664941 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.664954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.664964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.664973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.664990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 
2025-09-20 10:01:03.665017 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665080 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665099 | orchestrator | 2025-09-20 10:01:03.665108 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-20 10:01:03.665116 | orchestrator | Saturday 20 September 2025 09:57:17 +0000 (0:00:03.871) 0:05:19.746 **** 2025-09-20 10:01:03.665126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.665139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.665148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.665163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.665178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.665188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.665202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.665354 | orchestrator | 2025-09-20 10:01:03.665362 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-20 10:01:03.665370 | orchestrator | Saturday 20 September 2025 09:57:26 +0000 (0:00:08.451) 0:05:28.198 **** 2025-09-20 10:01:03.665378 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.665386 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.665393 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.665401 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.665409 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.665417 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.665425 | orchestrator | 2025-09-20 10:01:03.665432 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-20 10:01:03.665440 | orchestrator | Saturday 20 September 2025 09:57:27 +0000 (0:00:01.675) 0:05:29.873 **** 2025-09-20 10:01:03.665448 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-20 10:01:03.665456 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-20 10:01:03.665464 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-20 10:01:03.665472 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-20 10:01:03.665484 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-20 10:01:03.665492 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-20 10:01:03.665500 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  
2025-09-20 10:01:03.665508 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-20 10:01:03.665516 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.665523 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.665531 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-20 10:01:03.665539 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.665547 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-20 10:01:03.665555 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-20 10:01:03.665562 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-20 10:01:03.665570 | orchestrator | 2025-09-20 10:01:03.665578 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-20 10:01:03.665586 | orchestrator | Saturday 20 September 2025 09:57:33 +0000 (0:00:05.725) 0:05:35.599 **** 2025-09-20 10:01:03.665594 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.665601 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.665609 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.665617 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.665625 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.665633 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.665640 | orchestrator | 2025-09-20 10:01:03.665648 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-20 10:01:03.665662 | orchestrator | Saturday 20 September 2025 09:57:34 +0000 (0:00:00.530) 0:05:36.129 **** 2025-09-20 10:01:03.665670 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 
'auth.conf', 'service': 'nova-compute'})  2025-09-20 10:01:03.665678 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-20 10:01:03.665686 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-20 10:01:03.665694 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-20 10:01:03.665706 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-20 10:01:03.665714 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-20 10:01:03.665722 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-20 10:01:03.665729 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-20 10:01:03.665737 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-20 10:01:03.665745 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-20 10:01:03.665753 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.665761 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-20 10:01:03.665768 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.665776 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-20 10:01:03.665784 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.665792 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-20 10:01:03.665800 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-20 10:01:03.665808 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-20 10:01:03.665815 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-20 10:01:03.665823 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-20 10:01:03.665831 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-20 10:01:03.665839 | orchestrator | 2025-09-20 10:01:03.665846 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-20 10:01:03.665854 | orchestrator | Saturday 20 September 2025 09:57:40 +0000 (0:00:06.484) 0:05:42.614 **** 2025-09-20 10:01:03.665862 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 10:01:03.665870 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 10:01:03.665882 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-20 10:01:03.665890 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 10:01:03.665898 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 10:01:03.665906 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-20 10:01:03.665913 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-20 10:01:03.665927 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-20 10:01:03.665935 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-20 10:01:03.665943 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 10:01:03.665951 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 10:01:03.665959 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-20 10:01:03.665967 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 10:01:03.665974 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-20 10:01:03.665982 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.665990 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-20 10:01:03.665998 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.666005 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 10:01:03.666013 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-20 10:01:03.666048 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-20 10:01:03.666056 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.666064 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 10:01:03.666072 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-20 10:01:03.666080 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2025-09-20 10:01:03.666087 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 10:01:03.666095 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 10:01:03.666107 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-20 10:01:03.666115 | orchestrator | 2025-09-20 10:01:03.666123 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-20 10:01:03.666131 | orchestrator | Saturday 20 September 2025 09:57:48 +0000 (0:00:07.918) 0:05:50.532 **** 2025-09-20 10:01:03.666138 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.666146 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.666154 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.666162 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.666169 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.666177 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.666185 | orchestrator | 2025-09-20 10:01:03.666193 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-20 10:01:03.666201 | orchestrator | Saturday 20 September 2025 09:57:49 +0000 (0:00:00.958) 0:05:51.490 **** 2025-09-20 10:01:03.666225 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.666233 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.666241 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.666249 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.666256 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.666264 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.666272 | orchestrator | 2025-09-20 10:01:03.666280 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-20 
10:01:03.666288 | orchestrator | Saturday 20 September 2025 09:57:49 +0000 (0:00:00.563) 0:05:52.055 **** 2025-09-20 10:01:03.666296 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.666303 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:01:03.666311 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.666319 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.666332 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:01:03.666340 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:01:03.666348 | orchestrator | 2025-09-20 10:01:03.666356 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-20 10:01:03.666364 | orchestrator | Saturday 20 September 2025 09:57:52 +0000 (0:00:02.763) 0:05:54.818 **** 2025-09-20 10:01:03.666377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.666386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.666395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.666403 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.666415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.666424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.666441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.666449 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.666461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-20 10:01:03.666470 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-20 10:01:03.666484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.666492 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.666500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 10:01:03.666513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.666522 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.666530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 10:01:03.666542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-20 10:01:03.666551 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.666559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-20 10:01:03.666567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})
2025-09-20 10:01:03.666575 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.666583 | orchestrator |
2025-09-20 10:01:03.666591 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-09-20 10:01:03.666599 | orchestrator | Saturday 20 September 2025 09:57:54 +0000 (0:00:01.602) 0:05:56.420 ****
2025-09-20 10:01:03.666607 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-20 10:01:03.666615 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-20 10:01:03.666623 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:01:03.666634 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-20 10:01:03.666642 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-20 10:01:03.666650 | orchestrator | skipping: [testbed-node-4]
2025-09-20 10:01:03.666658 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-20 10:01:03.666671 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-20 10:01:03.666679 | orchestrator | skipping: [testbed-node-5]
2025-09-20 10:01:03.666687 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-20 10:01:03.666695 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-20 10:01:03.666702 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.666710 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-20 10:01:03.666718 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-20 10:01:03.666726 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.666734 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-20 10:01:03.666741 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-20 10:01:03.666749 | orchestrator | skipping: [testbed-node-2]
2025-09-20
10:01:03.666757 | orchestrator | 2025-09-20 10:01:03.666765 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-20 10:01:03.666773 | orchestrator | Saturday 20 September 2025 09:57:55 +0000 (0:00:00.930) 0:05:57.350 **** 2025-09-20 10:01:03.666781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666900 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-20 10:01:03.666937 | orchestrator | 2025-09-20 10:01:03.666945 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-20 10:01:03.666953 | orchestrator | Saturday 20 September 2025 09:57:58 +0000 (0:00:02.850) 0:06:00.201 **** 2025-09-20 10:01:03.666961 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:01:03.666969 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:01:03.666982 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:01:03.666990 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:01:03.666997 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:01:03.667005 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:01:03.667013 | orchestrator | 2025-09-20 10:01:03.667021 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-20 
10:01:03.667028 | orchestrator | Saturday 20 September 2025 09:57:58 +0000 (0:00:00.847) 0:06:01.049 ****
2025-09-20 10:01:03.667036 | orchestrator |
2025-09-20 10:01:03.667044 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-20 10:01:03.667052 | orchestrator | Saturday 20 September 2025 09:57:59 +0000 (0:00:00.143) 0:06:01.192 ****
2025-09-20 10:01:03.667059 | orchestrator |
2025-09-20 10:01:03.667067 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-20 10:01:03.667075 | orchestrator | Saturday 20 September 2025 09:57:59 +0000 (0:00:00.138) 0:06:01.331 ****
2025-09-20 10:01:03.667083 | orchestrator |
2025-09-20 10:01:03.667091 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-20 10:01:03.667098 | orchestrator | Saturday 20 September 2025 09:57:59 +0000 (0:00:00.199) 0:06:01.530 ****
2025-09-20 10:01:03.667106 | orchestrator |
2025-09-20 10:01:03.667117 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-20 10:01:03.667125 | orchestrator | Saturday 20 September 2025 09:57:59 +0000 (0:00:00.141) 0:06:01.671 ****
2025-09-20 10:01:03.667133 | orchestrator |
2025-09-20 10:01:03.667141 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-20 10:01:03.667149 | orchestrator | Saturday 20 September 2025 09:57:59 +0000 (0:00:00.202) 0:06:01.874 ****
2025-09-20 10:01:03.667156 | orchestrator |
2025-09-20 10:01:03.667164 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-09-20 10:01:03.667172 | orchestrator | Saturday 20 September 2025 09:58:00 +0000 (0:00:00.363) 0:06:02.237 ****
2025-09-20 10:01:03.667180 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.667187 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:01:03.667195 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:01:03.667203 | orchestrator |
2025-09-20 10:01:03.667262 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-20 10:01:03.667270 | orchestrator | Saturday 20 September 2025 09:58:09 +0000 (0:00:09.100) 0:06:11.337 ****
2025-09-20 10:01:03.667278 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.667286 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:01:03.667293 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:01:03.667301 | orchestrator |
2025-09-20 10:01:03.667309 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-20 10:01:03.667317 | orchestrator | Saturday 20 September 2025 09:58:25 +0000 (0:00:16.129) 0:06:27.467 ****
2025-09-20 10:01:03.667324 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:01:03.667332 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:01:03.667340 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:01:03.667348 | orchestrator |
2025-09-20 10:01:03.667355 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-20 10:01:03.667363 | orchestrator | Saturday 20 September 2025 09:58:49 +0000 (0:00:23.821) 0:06:51.289 ****
2025-09-20 10:01:03.667371 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:01:03.667379 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:01:03.667386 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:01:03.667394 | orchestrator |
2025-09-20 10:01:03.667402 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-09-20 10:01:03.667409 | orchestrator | Saturday 20 September 2025 09:59:26 +0000 (0:00:36.986) 0:07:28.275 ****
2025-09-20 10:01:03.667417 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:01:03.667425 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:01:03.667433 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:01:03.667440 | orchestrator |
2025-09-20 10:01:03.667448 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-20 10:01:03.667461 | orchestrator | Saturday 20 September 2025 09:59:26 +0000 (0:00:00.811) 0:07:29.087 ****
2025-09-20 10:01:03.667469 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:01:03.667477 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:01:03.667485 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:01:03.667493 | orchestrator |
2025-09-20 10:01:03.667500 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-09-20 10:01:03.667512 | orchestrator | Saturday 20 September 2025 09:59:27 +0000 (0:00:00.774) 0:07:29.861 ****
2025-09-20 10:01:03.667521 | orchestrator | changed: [testbed-node-4]
2025-09-20 10:01:03.667529 | orchestrator | changed: [testbed-node-5]
2025-09-20 10:01:03.667536 | orchestrator | changed: [testbed-node-3]
2025-09-20 10:01:03.667544 | orchestrator |
2025-09-20 10:01:03.667552 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-09-20 10:01:03.667560 | orchestrator | Saturday 20 September 2025 09:59:52 +0000 (0:00:25.026) 0:07:54.888 ****
2025-09-20 10:01:03.667568 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:01:03.667576 | orchestrator |
2025-09-20 10:01:03.667583 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-09-20 10:01:03.667591 | orchestrator | Saturday 20 September 2025 09:59:52 +0000 (0:00:00.143) 0:07:55.031 ****
2025-09-20 10:01:03.667599 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:01:03.667607 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.667615 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.667623 | orchestrator | skipping: [testbed-node-4]
2025-09-20 10:01:03.667630 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.667638 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-09-20 10:01:03.667646 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-20 10:01:03.667654 | orchestrator |
2025-09-20 10:01:03.667662 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-20 10:01:03.667670 | orchestrator | Saturday 20 September 2025 10:00:15 +0000 (0:00:22.870) 0:08:17.902 ****
2025-09-20 10:01:03.667677 | orchestrator | skipping: [testbed-node-5]
2025-09-20 10:01:03.667685 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.667693 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.667701 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:01:03.667709 | orchestrator | skipping: [testbed-node-4]
2025-09-20 10:01:03.667716 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.667724 | orchestrator |
2025-09-20 10:01:03.667732 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-20 10:01:03.667740 | orchestrator | Saturday 20 September 2025 10:00:24 +0000 (0:00:09.209) 0:08:27.112 ****
2025-09-20 10:01:03.667748 | orchestrator | skipping: [testbed-node-4]
2025-09-20 10:01:03.667756 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:01:03.667763 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.667771 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.667779 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.667787 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2025-09-20 10:01:03.667795 | orchestrator |
2025-09-20 10:01:03.667802 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-20 10:01:03.667810 | orchestrator | Saturday 20 September 2025 10:00:28 +0000 (0:00:03.555) 0:08:30.667 ****
2025-09-20 10:01:03.667818 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-20 10:01:03.667826 | orchestrator |
2025-09-20 10:01:03.667838 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-20 10:01:03.667846 | orchestrator | Saturday 20 September 2025 10:00:40 +0000 (0:00:12.356) 0:08:43.023 ****
2025-09-20 10:01:03.667854 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-20 10:01:03.667861 | orchestrator |
2025-09-20 10:01:03.667875 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-20 10:01:03.667883 | orchestrator | Saturday 20 September 2025 10:00:42 +0000 (0:00:01.226) 0:08:44.251 ****
2025-09-20 10:01:03.667891 | orchestrator | skipping: [testbed-node-5]
2025-09-20 10:01:03.667899 | orchestrator |
2025-09-20 10:01:03.667907 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-20 10:01:03.667915 | orchestrator | Saturday 20 September 2025 10:00:43 +0000 (0:00:01.165) 0:08:45.416 ****
2025-09-20 10:01:03.667922 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-20 10:01:03.667930 | orchestrator |
2025-09-20 10:01:03.667938 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-20 10:01:03.667946 | orchestrator | Saturday 20 September 2025 10:00:54 +0000 (0:00:11.080) 0:08:56.496 ****
2025-09-20 10:01:03.667953 | orchestrator | ok: [testbed-node-3]
2025-09-20 10:01:03.667961 | orchestrator | ok: [testbed-node-4]
2025-09-20 10:01:03.667969 | orchestrator | ok: [testbed-node-5]
2025-09-20 10:01:03.667977 | orchestrator | ok: [testbed-node-0]
2025-09-20 10:01:03.667985 | orchestrator | ok: [testbed-node-1]
2025-09-20 10:01:03.667993 | orchestrator | ok: [testbed-node-2]
2025-09-20 10:01:03.668000 | orchestrator |
2025-09-20 10:01:03.668008 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-20 10:01:03.668016 | orchestrator |
2025-09-20 10:01:03.668024 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-20 10:01:03.668032 | orchestrator | Saturday 20 September 2025 10:00:56 +0000 (0:00:01.803) 0:08:58.300 ****
2025-09-20 10:01:03.668040 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:01:03.668048 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:01:03.668056 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:01:03.668064 | orchestrator |
2025-09-20 10:01:03.668071 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-20 10:01:03.668079 | orchestrator |
2025-09-20 10:01:03.668087 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-20 10:01:03.668095 | orchestrator | Saturday 20 September 2025 10:00:57 +0000 (0:00:01.115) 0:08:59.415 ****
2025-09-20 10:01:03.668103 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.668111 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.668118 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.668126 | orchestrator |
2025-09-20 10:01:03.668134 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-20 10:01:03.668142 | orchestrator |
2025-09-20 10:01:03.668150 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-20 10:01:03.668158 | orchestrator | Saturday 20 September 2025 10:00:57 +0000 (0:00:00.557) 0:08:59.972 ****
2025-09-20 10:01:03.668165 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-09-20 10:01:03.668177 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-20 10:01:03.668185 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-20 10:01:03.668193 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-09-20 10:01:03.668201 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-09-20 10:01:03.668244 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-09-20 10:01:03.668253 | orchestrator | skipping: [testbed-node-3]
2025-09-20 10:01:03.668261 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-09-20 10:01:03.668268 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-20 10:01:03.668276 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-20 10:01:03.668284 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-09-20 10:01:03.668292 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-09-20 10:01:03.668300 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-09-20 10:01:03.668308 | orchestrator | skipping: [testbed-node-4]
2025-09-20 10:01:03.668321 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-09-20 10:01:03.668329 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-20 10:01:03.668337 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-20 10:01:03.668345 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-09-20 10:01:03.668352 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-09-20 10:01:03.668360 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-09-20 10:01:03.668368 | orchestrator | skipping: [testbed-node-5]
2025-09-20 10:01:03.668376 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-09-20 10:01:03.668383 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-20 10:01:03.668390 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-20 10:01:03.668397 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-09-20 10:01:03.668403 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-09-20 10:01:03.668410 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-09-20 10:01:03.668416 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.668423 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-09-20 10:01:03.668430 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-20 10:01:03.668436 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-20 10:01:03.668443 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-09-20 10:01:03.668450 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-09-20 10:01:03.668456 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-09-20 10:01:03.668469 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.668475 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-09-20 10:01:03.668482 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-20 10:01:03.668489 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-20 10:01:03.668495 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-09-20 10:01:03.668502 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-09-20 10:01:03.668509 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-09-20 10:01:03.668515 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.668522 | orchestrator |
2025-09-20 10:01:03.668528 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-09-20 10:01:03.668535 | orchestrator |
2025-09-20 10:01:03.668542 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-09-20 10:01:03.668548 | orchestrator | Saturday 20 September 2025 10:00:59 +0000 (0:00:01.414) 0:09:01.387 ****
2025-09-20 10:01:03.668555 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-09-20 10:01:03.668562 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-09-20 10:01:03.668568 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.668575 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-09-20 10:01:03.668581 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-09-20 10:01:03.668588 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.668595 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-09-20 10:01:03.668601 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-09-20 10:01:03.668608 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.668614 | orchestrator |
2025-09-20 10:01:03.668621 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-20 10:01:03.668627 | orchestrator |
2025-09-20 10:01:03.668634 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-20 10:01:03.668640 | orchestrator | Saturday 20 September 2025 10:01:00 +0000 (0:00:00.780) 0:09:02.167 ****
2025-09-20 10:01:03.668647 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.668659 | orchestrator |
2025-09-20 10:01:03.668666 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-20 10:01:03.668672 | orchestrator |
2025-09-20 10:01:03.668679 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-20 10:01:03.668686 | orchestrator | Saturday 20 September 2025 10:01:00 +0000 (0:00:00.758) 0:09:02.925 ****
2025-09-20 10:01:03.668692 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:01:03.668699 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:01:03.668705 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:01:03.668712 | orchestrator |
2025-09-20 10:01:03.668718 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 10:01:03.668725 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-20 10:01:03.668736 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-09-20 10:01:03.668743 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-20 10:01:03.668750 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-20 10:01:03.668757 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-20 10:01:03.668763 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-20 10:01:03.668770 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-09-20 10:01:03.668777 | orchestrator |
2025-09-20 10:01:03.668783 | orchestrator |
2025-09-20 10:01:03.668790 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 10:01:03.668797 | orchestrator | Saturday 20 September 2025 10:01:01 +0000 (0:00:00.490) 0:09:03.416 ****
2025-09-20 10:01:03.668804 | orchestrator | ===============================================================================
2025-09-20 10:01:03.668810 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.99s
2025-09-20 10:01:03.668817 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.84s
2025-09-20 10:01:03.668823 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 27.46s 2025-09-20 10:01:03.668830 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.03s 2025-09-20 10:01:03.668836 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.82s 2025-09-20 10:01:03.668843 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.13s 2025-09-20 10:01:03.668850 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.87s 2025-09-20 10:01:03.668856 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.13s 2025-09-20 10:01:03.668863 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.13s 2025-09-20 10:01:03.668869 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.54s 2025-09-20 10:01:03.668879 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.93s 2025-09-20 10:01:03.668886 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.36s 2025-09-20 10:01:03.668893 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.72s 2025-09-20 10:01:03.668899 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.08s 2025-09-20 10:01:03.668906 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.84s 2025-09-20 10:01:03.668918 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.63s 2025-09-20 10:01:03.668924 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.21s 2025-09-20 10:01:03.668931 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.16s 2025-09-20 
10:01:03.668938 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 9.10s 2025-09-20 10:01:03.668944 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 8.45s 2025-09-20 10:01:03.668951 | orchestrator | 2025-09-20 10:01:03 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:01:03.668958 | orchestrator | 2025-09-20 10:01:03 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:01:06.702823 | orchestrator | 2025-09-20 10:01:06 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:01:06.702926 | orchestrator | 2025-09-20 10:01:06 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:01:09.749112 | orchestrator | 2025-09-20 10:01:09 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:01:09.749254 | orchestrator | 2025-09-20 10:01:09 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:01:12.791019 | orchestrator | 2025-09-20 10:01:12 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:01:12.791119 | orchestrator | 2025-09-20 10:01:12 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:01:15.827925 | orchestrator | 2025-09-20 10:01:15 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:01:15.828042 | orchestrator | 2025-09-20 10:01:15 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:01:18.867271 | orchestrator | 2025-09-20 10:01:18 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:01:18.867375 | orchestrator | 2025-09-20 10:01:18 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:01:21.917949 | orchestrator | 2025-09-20 10:01:21 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:01:21.918106 | orchestrator | 2025-09-20 10:01:21 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:01:24.958398 
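The PLAY RECAP block above has a regular host/counter shape. A minimal sketch of parsing one such line for post-processing; the regex and the "healthy" criterion are assumptions of this sketch, not part of the deployment tooling:

```python
import re

# One Ansible PLAY RECAP host line looks like:
# "testbed-node-0 : ok=54 changed=35 unreachable=0 failed=0 skipped=44 rescued=0 ignored=0"
RECAP_RE = re.compile(r"(?P<host>\S+)\s*:\s*(?P<fields>(?:\w+=\d+\s*)+)")

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {counter: value}) for a single recap line."""
    m = RECAP_RE.search(line)
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", m.group("fields"))}
    return m.group("host"), counters

host, counters = parse_recap(
    "testbed-node-0 : ok=54 changed=35 unreachable=0 failed=0 skipped=44 rescued=0 ignored=0"
)
# Treat a node as healthy when no task failed and it stayed reachable.
healthy = counters["failed"] == 0 and counters["unreachable"] == 0
```

This is useful when scraping many job logs for regressions: any host with `failed > 0` or `unreachable > 0` flags the build for closer inspection.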
| orchestrator | [ "Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED" / "Wait 1 second(s) until the next check" repeated every ~3 s from 10:01:24 through 10:04:15 ] 2025-09-20 10:04:15 | INFO  | Task
60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:04:15.435774 | orchestrator | 2025-09-20 10:04:15 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:04:18.476625 | orchestrator | 2025-09-20 10:04:18 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state STARTED 2025-09-20 10:04:18.476731 | orchestrator | 2025-09-20 10:04:18 | INFO  | Wait 1 second(s) until the next check 2025-09-20 10:04:21.526785 | orchestrator | 2025-09-20 10:04:21 | INFO  | Task 60c91f15-f6b6-4e5c-890b-f93698f7803d is in state SUCCESS 2025-09-20 10:04:21.529676 | orchestrator | 2025-09-20 10:04:21.529781 | orchestrator | 2025-09-20 10:04:21.529795 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:04:21.529807 | orchestrator | 2025-09-20 10:04:21.529818 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:04:21.530687 | orchestrator | Saturday 20 September 2025 09:59:24 +0000 (0:00:00.305) 0:00:00.305 **** 2025-09-20 10:04:21.530704 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:04:21.530716 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:04:21.530727 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:04:21.530756 | orchestrator | 2025-09-20 10:04:21.530767 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:04:21.530780 | orchestrator | Saturday 20 September 2025 09:59:24 +0000 (0:00:00.314) 0:00:00.620 **** 2025-09-20 10:04:21.530791 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-20 10:04:21.530803 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-20 10:04:21.530813 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-20 10:04:21.530824 | orchestrator | 2025-09-20 10:04:21.530835 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-20 
10:04:21.530846 | orchestrator | 2025-09-20 10:04:21.530857 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 10:04:21.530868 | orchestrator | Saturday 20 September 2025 09:59:24 +0000 (0:00:00.443) 0:00:01.064 **** 2025-09-20 10:04:21.530879 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:04:21.530891 | orchestrator | 2025-09-20 10:04:21.530902 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-20 10:04:21.530913 | orchestrator | Saturday 20 September 2025 09:59:25 +0000 (0:00:00.626) 0:00:01.691 **** 2025-09-20 10:04:21.530924 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-20 10:04:21.530964 | orchestrator | 2025-09-20 10:04:21.530975 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-20 10:04:21.530986 | orchestrator | Saturday 20 September 2025 09:59:29 +0000 (0:00:03.957) 0:00:05.648 **** 2025-09-20 10:04:21.531011 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-20 10:04:21.531022 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-20 10:04:21.531033 | orchestrator | 2025-09-20 10:04:21.531043 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-20 10:04:21.531054 | orchestrator | Saturday 20 September 2025 09:59:36 +0000 (0:00:07.112) 0:00:12.761 **** 2025-09-20 10:04:21.531065 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-20 10:04:21.531075 | orchestrator | 2025-09-20 10:04:21.531086 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-20 10:04:21.531097 | orchestrator | Saturday 20 September 2025 09:59:39 +0000 
(0:00:03.197) 0:00:15.959 **** 2025-09-20 10:04:21.531107 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-20 10:04:21.531118 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-20 10:04:21.531129 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-20 10:04:21.531140 | orchestrator | 2025-09-20 10:04:21.531151 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-20 10:04:21.531162 | orchestrator | Saturday 20 September 2025 09:59:48 +0000 (0:00:08.319) 0:00:24.278 **** 2025-09-20 10:04:21.531172 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-20 10:04:21.531183 | orchestrator | 2025-09-20 10:04:21.531194 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-20 10:04:21.531204 | orchestrator | Saturday 20 September 2025 09:59:51 +0000 (0:00:03.431) 0:00:27.709 **** 2025-09-20 10:04:21.531215 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-20 10:04:21.531226 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-20 10:04:21.531263 | orchestrator | 2025-09-20 10:04:21.531274 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-20 10:04:21.531284 | orchestrator | Saturday 20 September 2025 10:00:00 +0000 (0:00:08.998) 0:00:36.708 **** 2025-09-20 10:04:21.531295 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-20 10:04:21.531306 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-20 10:04:21.531316 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-20 10:04:21.531330 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-20 10:04:21.531342 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 
2025-09-20 10:04:21.531355 | orchestrator | 2025-09-20 10:04:21.531367 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 10:04:21.531379 | orchestrator | Saturday 20 September 2025 10:00:17 +0000 (0:00:16.741) 0:00:53.450 **** 2025-09-20 10:04:21.531392 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:04:21.531404 | orchestrator | 2025-09-20 10:04:21.531417 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-20 10:04:21.531429 | orchestrator | Saturday 20 September 2025 10:00:18 +0000 (0:00:01.258) 0:00:54.708 **** 2025-09-20 10:04:21.531442 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.531454 | orchestrator | 2025-09-20 10:04:21.531466 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-20 10:04:21.531478 | orchestrator | Saturday 20 September 2025 10:00:23 +0000 (0:00:04.873) 0:00:59.582 **** 2025-09-20 10:04:21.531491 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.531503 | orchestrator | 2025-09-20 10:04:21.531515 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-20 10:04:21.531577 | orchestrator | Saturday 20 September 2025 10:00:27 +0000 (0:00:04.137) 0:01:03.720 **** 2025-09-20 10:04:21.531592 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:04:21.531604 | orchestrator | 2025-09-20 10:04:21.531616 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-20 10:04:21.531628 | orchestrator | Saturday 20 September 2025 10:00:31 +0000 (0:00:03.383) 0:01:07.104 **** 2025-09-20 10:04:21.531641 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-20 10:04:21.531654 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-20 
10:04:21.531667 | orchestrator | 2025-09-20 10:04:21.531678 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-20 10:04:21.531689 | orchestrator | Saturday 20 September 2025 10:00:41 +0000 (0:00:10.282) 0:01:17.386 **** 2025-09-20 10:04:21.531700 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-20 10:04:21.531711 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-09-20 10:04:21.531724 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-20 10:04:21.531736 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-20 10:04:21.531747 | orchestrator | 2025-09-20 10:04:21.531757 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-20 10:04:21.531768 | orchestrator | Saturday 20 September 2025 10:00:58 +0000 (0:00:17.242) 0:01:34.628 **** 2025-09-20 10:04:21.531779 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.531790 | orchestrator | 2025-09-20 10:04:21.531800 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-20 10:04:21.531811 | orchestrator | Saturday 20 September 2025 10:01:03 +0000 (0:00:04.641) 0:01:39.270 **** 2025-09-20 10:04:21.531828 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.531839 | orchestrator | 2025-09-20 10:04:21.531849 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-20 10:04:21.531860 | orchestrator | Saturday 20 September 2025 10:01:08 +0000 (0:00:05.798) 0:01:45.069 **** 2025-09-20 
10:04:21.531871 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:04:21.531882 | orchestrator | 2025-09-20 10:04:21.531893 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-20 10:04:21.531903 | orchestrator | Saturday 20 September 2025 10:01:09 +0000 (0:00:00.206) 0:01:45.275 **** 2025-09-20 10:04:21.531914 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.531925 | orchestrator | 2025-09-20 10:04:21.531935 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 10:04:21.531946 | orchestrator | Saturday 20 September 2025 10:01:14 +0000 (0:00:05.289) 0:01:50.565 **** 2025-09-20 10:04:21.531957 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:04:21.531968 | orchestrator | 2025-09-20 10:04:21.531978 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-20 10:04:21.531989 | orchestrator | Saturday 20 September 2025 10:01:15 +0000 (0:00:01.107) 0:01:51.672 **** 2025-09-20 10:04:21.531999 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:04:21.532010 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:04:21.532021 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.532032 | orchestrator | 2025-09-20 10:04:21.532042 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-20 10:04:21.532053 | orchestrator | Saturday 20 September 2025 10:01:20 +0000 (0:00:05.353) 0:01:57.025 **** 2025-09-20 10:04:21.532064 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:04:21.532074 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:04:21.532149 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.532162 | orchestrator | 2025-09-20 10:04:21.532173 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] 
************************ 2025-09-20 10:04:21.532184 | orchestrator | Saturday 20 September 2025 10:01:25 +0000 (0:00:04.878) 0:02:01.904 **** 2025-09-20 10:04:21.532195 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.532206 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:04:21.532217 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:04:21.532272 | orchestrator | 2025-09-20 10:04:21.532285 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-20 10:04:21.532296 | orchestrator | Saturday 20 September 2025 10:01:26 +0000 (0:00:00.807) 0:02:02.711 **** 2025-09-20 10:04:21.532307 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:04:21.532318 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:04:21.532329 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:04:21.532340 | orchestrator | 2025-09-20 10:04:21.532351 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-20 10:04:21.532361 | orchestrator | Saturday 20 September 2025 10:01:28 +0000 (0:00:02.075) 0:02:04.787 **** 2025-09-20 10:04:21.532372 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:04:21.532383 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.532394 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:04:21.532404 | orchestrator | 2025-09-20 10:04:21.532415 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-09-20 10:04:21.532426 | orchestrator | Saturday 20 September 2025 10:01:29 +0000 (0:00:01.264) 0:02:06.051 **** 2025-09-20 10:04:21.532437 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:04:21.532447 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.532458 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:04:21.532469 | orchestrator | 2025-09-20 10:04:21.532480 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 
2025-09-20 10:04:21.532491 | orchestrator | Saturday 20 September 2025 10:01:31 +0000 (0:00:01.301) 0:02:07.353 **** 2025-09-20 10:04:21.532501 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:04:21.532512 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:04:21.532527 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.532539 | orchestrator | 2025-09-20 10:04:21.532584 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-20 10:04:21.532596 | orchestrator | Saturday 20 September 2025 10:01:33 +0000 (0:00:01.999) 0:02:09.352 **** 2025-09-20 10:04:21.532607 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:04:21.532618 | orchestrator | changed: [testbed-node-1] 2025-09-20 10:04:21.532628 | orchestrator | changed: [testbed-node-2] 2025-09-20 10:04:21.532639 | orchestrator | 2025-09-20 10:04:21.532650 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-20 10:04:21.532660 | orchestrator | Saturday 20 September 2025 10:01:34 +0000 (0:00:01.565) 0:02:10.918 **** 2025-09-20 10:04:21.532671 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:04:21.532682 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:04:21.532692 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:04:21.532703 | orchestrator | 2025-09-20 10:04:21.532714 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-09-20 10:04:21.532725 | orchestrator | Saturday 20 September 2025 10:01:35 +0000 (0:00:00.870) 0:02:11.788 **** 2025-09-20 10:04:21.532735 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:04:21.532746 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:04:21.532756 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:04:21.532767 | orchestrator | 2025-09-20 10:04:21.532778 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 10:04:21.532788 | orchestrator | 
Saturday 20 September 2025 10:01:38 +0000 (0:00:02.878) 0:02:14.667 **** 2025-09-20 10:04:21.532799 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:04:21.532810 | orchestrator | 2025-09-20 10:04:21.532821 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-20 10:04:21.532840 | orchestrator | Saturday 20 September 2025 10:01:39 +0000 (0:00:00.540) 0:02:15.207 **** 2025-09-20 10:04:21.532850 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:04:21.532861 | orchestrator | 2025-09-20 10:04:21.532872 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-20 10:04:21.532883 | orchestrator | Saturday 20 September 2025 10:01:43 +0000 (0:00:03.967) 0:02:19.174 **** 2025-09-20 10:04:21.532893 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:04:21.532904 | orchestrator | 2025-09-20 10:04:21.532921 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-20 10:04:21.532932 | orchestrator | Saturday 20 September 2025 10:01:46 +0000 (0:00:03.115) 0:02:22.289 **** 2025-09-20 10:04:21.532943 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-20 10:04:21.532954 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-20 10:04:21.532965 | orchestrator | 2025-09-20 10:04:21.532976 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-20 10:04:21.532987 | orchestrator | Saturday 20 September 2025 10:01:52 +0000 (0:00:06.631) 0:02:28.921 **** 2025-09-20 10:04:21.532997 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:04:21.533008 | orchestrator | 2025-09-20 10:04:21.533019 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-20 10:04:21.533029 | orchestrator | Saturday 20 September 2025 
10:01:56 +0000 (0:00:03.319) 0:02:32.240 **** 2025-09-20 10:04:21.533040 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:04:21.533051 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:04:21.533061 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:04:21.533072 | orchestrator | 2025-09-20 10:04:21.533082 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-20 10:04:21.533093 | orchestrator | Saturday 20 September 2025 10:01:56 +0000 (0:00:00.304) 0:02:32.545 **** 2025-09-20 10:04:21.533107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:04:21.533149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:04:21.533163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:04:21.533187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 10:04:21.533200 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 10:04:21.533212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 10:04:21.533224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.533253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.533292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.533313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.533329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.533341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.533352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 10:04:21.533364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 10:04:21.533376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 10:04:21.533387 | orchestrator | 2025-09-20 10:04:21.533398 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-20 10:04:21.533409 | orchestrator | Saturday 20 September 2025 10:01:58 +0000 (0:00:02.309) 0:02:34.854 **** 2025-09-20 10:04:21.533426 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:04:21.533437 | orchestrator | 2025-09-20 10:04:21.533472 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-20 10:04:21.533484 | orchestrator | Saturday 20 September 2025 10:01:58 +0000 (0:00:00.117) 0:02:34.972 **** 2025-09-20 10:04:21.533495 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:04:21.533506 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:04:21.533517 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:04:21.533528 | orchestrator | 2025-09-20 10:04:21.533538 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-20 10:04:21.533549 | orchestrator | Saturday 20 September 2025 10:01:59 +0000 (0:00:00.497) 0:02:35.470 **** 
2025-09-20 10:04:21.533560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:04:21.533577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:04:21.533589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.533601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.533612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:04:21.533630 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:04:21.533668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:04:21.533681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:04:21.533704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.533715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.533727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:04:21.533738 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:04:21.533750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-09-20 10:04:21.533796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:04:21.533809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.533825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.533837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:04:21.533848 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:04:21.533859 | orchestrator | 2025-09-20 10:04:21.533870 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-20 10:04:21.533881 | orchestrator | Saturday 20 September 2025 10:02:00 +0000 (0:00:00.733) 0:02:36.204 **** 2025-09-20 10:04:21.533892 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:04:21.533903 | orchestrator | 2025-09-20 10:04:21.533913 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-20 10:04:21.533924 | orchestrator | Saturday 20 September 2025 10:02:00 +0000 (0:00:00.576) 0:02:36.780 **** 2025-09-20 10:04:21.533936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:04:21.533980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:04:21.534007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:04:21.534054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 10:04:21.534069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 10:04:21.534081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 10:04:21.534100 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.534138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.534151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.534163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.534178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.534190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-20 10:04:21.534201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 10:04:21.534219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 10:04:21.534295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-20 10:04:21.534308 | orchestrator | 2025-09-20 10:04:21.534320 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-20 10:04:21.534331 | orchestrator | Saturday 20 September 2025 10:02:05 +0000 (0:00:05.226) 0:02:42.007 **** 2025-09-20 10:04:21.534342 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:04:21.534359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:04:21.534371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:04:21.534413 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:04:21.534434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:04:21.534446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:04:21.534463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:04:21.534503 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:04:21.534515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:04:21.534533 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:04:21.534545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:04:21.534589 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:04:21.534600 | orchestrator | 2025-09-20 10:04:21.534611 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-20 10:04:21.534622 | orchestrator | Saturday 20 September 2025 10:02:06 +0000 (0:00:00.931) 0:02:42.938 **** 2025-09-20 10:04:21.534633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:04:21.534644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:04:21.534661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:04:21.534695 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:04:21.534711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:04:21.534732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:04:21.534744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:04:21.534784 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:04:21.534800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-20 10:04:21.534817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-20 10:04:21.534827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-20 
10:04:21.534837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-20 10:04:21.534847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-20 10:04:21.534857 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:04:21.534867 | orchestrator | 2025-09-20 10:04:21.534877 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-20 10:04:21.534886 | orchestrator | Saturday 20 September 2025 10:02:07 +0000 (0:00:00.851) 0:02:43.790 **** 2025-09-20 10:04:21.534903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:04:21.534918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:04:21.534934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-20 10:04:21.534945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 10:04:21.534955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-20 10:04:21.534971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-20 10:04:21.534981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.534991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-20 10:04:21.535069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-20 10:04:21.535087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-20 10:04:21.535096 | orchestrator |
2025-09-20 10:04:21.535110 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2025-09-20 10:04:21.535120 | orchestrator | Saturday 20 September 2025 10:02:12 +0000 (0:00:05.216) 0:02:49.007 ****
2025-09-20 10:04:21.535130 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-20 10:04:21.535139 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-20 10:04:21.535149 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-20 10:04:21.535159 | orchestrator |
2025-09-20 10:04:21.535168 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2025-09-20 10:04:21.535178 | orchestrator | Saturday 20 September 2025 10:02:15 +0000 (0:00:02.152) 0:02:51.159 ****
2025-09-20 10:04:21.535187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-20 10:04:21.535198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-20 10:04:21.535214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-20 10:04:21.535246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-20 10:04:21.535261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-20 10:04:21.535272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-20 10:04:21.535282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.535358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-20 10:04:21.535368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-20 10:04:21.535378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-20 10:04:21.535388 | orchestrator |
2025-09-20 10:04:21.535398 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2025-09-20 10:04:21.535407 | orchestrator | Saturday 20 September 2025 10:02:30 +0000 (0:00:15.943) 0:03:07.103 ****
2025-09-20 10:04:21.535417 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.535427 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:04:21.535436 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:04:21.535446 | orchestrator |
2025-09-20 10:04:21.535455 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2025-09-20 10:04:21.535471 | orchestrator | Saturday 20 September 2025 10:02:32 +0000 (0:00:01.524) 0:03:08.627 ****
2025-09-20 10:04:21.535485 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-09-20 10:04:21.535495 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-09-20 10:04:21.535504 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-09-20 10:04:21.535514 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-09-20 10:04:21.535523 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-09-20 10:04:21.535533 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-09-20 10:04:21.535542 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-09-20 10:04:21.535551 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-09-20 10:04:21.535561 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-09-20 10:04:21.535570 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-09-20 10:04:21.535580 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-09-20 10:04:21.535589 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-09-20 10:04:21.535599 | orchestrator |
2025-09-20 10:04:21.535608 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2025-09-20 10:04:21.535618 | orchestrator | Saturday 20 September 2025 10:02:37 +0000 (0:00:05.160) 0:03:13.788 ****
2025-09-20 10:04:21.535627 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-09-20 10:04:21.535637 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-09-20 10:04:21.535646 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-09-20 10:04:21.535656 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-09-20 10:04:21.535665 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-09-20 10:04:21.535675 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-09-20 10:04:21.535685 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-09-20 10:04:21.535694 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-09-20 10:04:21.535708 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-09-20 10:04:21.535718 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-09-20 10:04:21.535727 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-09-20 10:04:21.535736 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-09-20 10:04:21.535746 | orchestrator |
2025-09-20 10:04:21.535756 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2025-09-20 10:04:21.535765 | orchestrator | Saturday 20 September 2025 10:02:43 +0000 (0:00:05.367) 0:03:19.155 ****
2025-09-20 10:04:21.535775 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2025-09-20 10:04:21.535784 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2025-09-20 10:04:21.535794 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2025-09-20 10:04:21.535803 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2025-09-20 10:04:21.535812 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2025-09-20 10:04:21.535822 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2025-09-20 10:04:21.535831 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2025-09-20 10:04:21.535841 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2025-09-20 10:04:21.535850 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2025-09-20 10:04:21.535859 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2025-09-20 10:04:21.535869 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2025-09-20 10:04:21.535878 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2025-09-20 10:04:21.535896 | orchestrator |
2025-09-20 10:04:21.535906 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2025-09-20 10:04:21.535916 | orchestrator | Saturday 20 September 2025 10:02:48 +0000 (0:00:05.608) 0:03:24.764 ****
2025-09-20 10:04:21.535925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-20 10:04:21.535941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-20 10:04:21.535956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-20 10:04:21.535967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-20 10:04:21.535977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-20 10:04:21.535995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-20 10:04:21.536005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.536022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.536033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.536043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.536057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.536067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-20 10:04:21.536084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-20 10:04:21.536094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-20 10:04:21.536109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-20 10:04:21.536120 | orchestrator |
2025-09-20 10:04:21.536129 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-20 10:04:21.536139 | orchestrator | Saturday 20 September 2025 10:02:52 +0000 (0:00:03.805) 0:03:28.569 ****
2025-09-20 10:04:21.536149 | orchestrator | skipping: [testbed-node-0]
2025-09-20 10:04:21.536158 | orchestrator | skipping: [testbed-node-1]
2025-09-20 10:04:21.536168 | orchestrator | skipping: [testbed-node-2]
2025-09-20 10:04:21.536177 | orchestrator |
2025-09-20 10:04:21.536187 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2025-09-20 10:04:21.536197 | orchestrator | Saturday 20 September 2025 10:02:52 +0000 (0:00:00.329) 0:03:28.899 ****
2025-09-20 10:04:21.536206 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536216 | orchestrator |
2025-09-20 10:04:21.536225 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-09-20 10:04:21.536250 | orchestrator | Saturday 20 September 2025 10:02:54 +0000 (0:00:02.032) 0:03:30.931 ****
2025-09-20 10:04:21.536259 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536269 | orchestrator |
2025-09-20 10:04:21.536279 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2025-09-20 10:04:21.536289 | orchestrator | Saturday 20 September 2025 10:02:56 +0000 (0:00:02.041) 0:03:32.973 ****
2025-09-20 10:04:21.536298 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536308 | orchestrator |
2025-09-20 10:04:21.536317 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2025-09-20 10:04:21.536327 | orchestrator | Saturday 20 September 2025 10:02:59 +0000 (0:00:02.153) 0:03:35.126 ****
2025-09-20 10:04:21.536336 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536346 | orchestrator |
2025-09-20 10:04:21.536356 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-09-20 10:04:21.536375 | orchestrator | Saturday 20 September 2025 10:03:01 +0000 (0:00:02.173) 0:03:37.300 ****
2025-09-20 10:04:21.536385 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536395 | orchestrator |
2025-09-20 10:04:21.536404 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-09-20 10:04:21.536414 | orchestrator | Saturday 20 September 2025 10:03:22 +0000 (0:00:21.076) 0:03:58.376 ****
2025-09-20 10:04:21.536423 | orchestrator |
2025-09-20 10:04:21.536433 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-09-20 10:04:21.536442 | orchestrator | Saturday 20 September 2025 10:03:22 +0000 (0:00:00.067) 0:03:58.444 ****
2025-09-20 10:04:21.536452 | orchestrator |
2025-09-20 10:04:21.536462 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-09-20 10:04:21.536471 | orchestrator | Saturday 20 September 2025 10:03:22 +0000 (0:00:00.067) 0:03:58.511 ****
2025-09-20 10:04:21.536481 | orchestrator |
2025-09-20 10:04:21.536490 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-09-20 10:04:21.536500 | orchestrator | Saturday 20 September 2025 10:03:22 +0000 (0:00:00.064) 0:03:58.575 ****
2025-09-20 10:04:21.536509 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536519 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:04:21.536529 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:04:21.536538 | orchestrator |
2025-09-20 10:04:21.536548 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-09-20 10:04:21.536558 | orchestrator | Saturday 20 September 2025 10:03:39 +0000 (0:00:16.543) 0:04:15.119 ****
2025-09-20 10:04:21.536567 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:04:21.536577 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:04:21.536586 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536596 | orchestrator |
2025-09-20 10:04:21.536605 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-09-20 10:04:21.536615 | orchestrator | Saturday 20 September 2025 10:03:47 +0000 (0:00:08.068) 0:04:23.188 ****
2025-09-20 10:04:21.536624 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536634 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:04:21.536643 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:04:21.536653 | orchestrator |
2025-09-20 10:04:21.536662 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-09-20 10:04:21.536672 | orchestrator | Saturday 20 September 2025 10:03:57 +0000 (0:00:10.427) 0:04:33.616 ****
2025-09-20 10:04:21.536682 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536691 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:04:21.536701 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:04:21.536710 | orchestrator |
2025-09-20 10:04:21.536720 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-09-20 10:04:21.536730 | orchestrator | Saturday 20 September 2025 10:04:08 +0000 (0:00:10.517) 0:04:44.133 ****
2025-09-20 10:04:21.536739 | orchestrator | changed: [testbed-node-0]
2025-09-20 10:04:21.536748 | orchestrator | changed: [testbed-node-1]
2025-09-20 10:04:21.536758 | orchestrator | changed: [testbed-node-2]
2025-09-20 10:04:21.536767 | orchestrator |
2025-09-20 10:04:21.536777 | orchestrator | PLAY RECAP *********************************************************************
2025-09-20 10:04:21.536787 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-20 10:04:21.536797 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 10:04:21.536807 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-20 10:04:21.536817 | orchestrator |
2025-09-20 10:04:21.536827 | orchestrator |
2025-09-20 10:04:21.536836 | orchestrator | TASKS RECAP ********************************************************************
2025-09-20 10:04:21.536852 | orchestrator | Saturday 20 September 2025 10:04:18 +0000 (0:00:10.407) 0:04:54.541 ****
2025-09-20 10:04:21.536868 | orchestrator | ===============================================================================
2025-09-20 10:04:21.536877 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.08s
2025-09-20 10:04:21.536887 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.24s
2025-09-20 10:04:21.536896 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.74s
2025-09-20 10:04:21.536905 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.54s
2025-09-20 10:04:21.536915 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.94s
2025-09-20 10:04:21.536924 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.52s 2025-09-20 10:04:21.536934 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.43s 2025-09-20 10:04:21.536943 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.41s 2025-09-20 10:04:21.536952 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.28s 2025-09-20 10:04:21.536962 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 9.00s 2025-09-20 10:04:21.536971 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.32s 2025-09-20 10:04:21.536981 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.07s 2025-09-20 10:04:21.536990 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.11s 2025-09-20 10:04:21.537000 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.63s 2025-09-20 10:04:21.537009 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.80s 2025-09-20 10:04:21.537018 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.61s 2025-09-20 10:04:21.537032 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.37s 2025-09-20 10:04:21.537042 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.35s 2025-09-20 10:04:21.537051 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 5.29s 2025-09-20 10:04:21.537061 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.23s 2025-09-20 10:04:24.568745 | orchestrator | 2025-09-20 10:04:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 
10:04:27.615766 | orchestrator | 2025-09-20 10:04:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:30.658689 | orchestrator | 2025-09-20 10:04:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:33.699098 | orchestrator | 2025-09-20 10:04:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:36.742906 | orchestrator | 2025-09-20 10:04:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:39.777694 | orchestrator | 2025-09-20 10:04:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:42.812184 | orchestrator | 2025-09-20 10:04:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:45.855602 | orchestrator | 2025-09-20 10:04:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:48.896161 | orchestrator | 2025-09-20 10:04:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:51.941154 | orchestrator | 2025-09-20 10:04:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:54.978866 | orchestrator | 2025-09-20 10:04:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:04:58.020212 | orchestrator | 2025-09-20 10:04:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:05:01.062335 | orchestrator | 2025-09-20 10:05:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:05:04.106234 | orchestrator | 2025-09-20 10:05:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:05:07.151147 | orchestrator | 2025-09-20 10:05:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:05:10.197712 | orchestrator | 2025-09-20 10:05:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:05:13.233952 | orchestrator | 2025-09-20 10:05:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:05:16.276761 | orchestrator | 2025-09-20 
10:05:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:05:19.312002 | orchestrator | 2025-09-20 10:05:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-20 10:05:22.351351 | orchestrator | 2025-09-20 10:05:22.677592 | orchestrator | 2025-09-20 10:05:22.684864 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Sep 20 10:05:22 UTC 2025 2025-09-20 10:05:22.684959 | orchestrator | 2025-09-20 10:05:23.078991 | orchestrator | ok: Runtime: 0:34:17.660749 2025-09-20 10:05:23.342222 | 2025-09-20 10:05:23.342363 | TASK [Bootstrap services] 2025-09-20 10:05:24.076439 | orchestrator | 2025-09-20 10:05:24.076633 | orchestrator | # BOOTSTRAP 2025-09-20 10:05:24.076658 | orchestrator | 2025-09-20 10:05:24.076672 | orchestrator | + set -e 2025-09-20 10:05:24.076685 | orchestrator | + echo 2025-09-20 10:05:24.076700 | orchestrator | + echo '# BOOTSTRAP' 2025-09-20 10:05:24.076717 | orchestrator | + echo 2025-09-20 10:05:24.076762 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-20 10:05:24.086617 | orchestrator | + set -e 2025-09-20 10:05:24.086676 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-20 10:05:28.786050 | orchestrator | 2025-09-20 10:05:28 | INFO  | It takes a moment until task ae352846-bc44-4991-9f17-dc7dc1dca232 (flavor-manager) has been started and output is visible here. 
2025-09-20 10:05:37.369701 | orchestrator | 2025-09-20 10:05:32 | INFO  | Flavor SCS-1L-1 created 2025-09-20 10:05:37.369747 | orchestrator | 2025-09-20 10:05:32 | INFO  | Flavor SCS-1L-1-5 created 2025-09-20 10:05:37.369754 | orchestrator | 2025-09-20 10:05:33 | INFO  | Flavor SCS-1V-2 created 2025-09-20 10:05:37.369758 | orchestrator | 2025-09-20 10:05:33 | INFO  | Flavor SCS-1V-2-5 created 2025-09-20 10:05:37.369762 | orchestrator | 2025-09-20 10:05:33 | INFO  | Flavor SCS-1V-4 created 2025-09-20 10:05:37.369766 | orchestrator | 2025-09-20 10:05:33 | INFO  | Flavor SCS-1V-4-10 created 2025-09-20 10:05:37.369770 | orchestrator | 2025-09-20 10:05:33 | INFO  | Flavor SCS-1V-8 created 2025-09-20 10:05:37.369774 | orchestrator | 2025-09-20 10:05:33 | INFO  | Flavor SCS-1V-8-20 created 2025-09-20 10:05:37.369781 | orchestrator | 2025-09-20 10:05:34 | INFO  | Flavor SCS-2V-4 created 2025-09-20 10:05:37.369785 | orchestrator | 2025-09-20 10:05:34 | INFO  | Flavor SCS-2V-4-10 created 2025-09-20 10:05:37.369789 | orchestrator | 2025-09-20 10:05:34 | INFO  | Flavor SCS-2V-8 created 2025-09-20 10:05:37.369793 | orchestrator | 2025-09-20 10:05:34 | INFO  | Flavor SCS-2V-8-20 created 2025-09-20 10:05:37.369797 | orchestrator | 2025-09-20 10:05:34 | INFO  | Flavor SCS-2V-16 created 2025-09-20 10:05:37.369800 | orchestrator | 2025-09-20 10:05:34 | INFO  | Flavor SCS-2V-16-50 created 2025-09-20 10:05:37.369804 | orchestrator | 2025-09-20 10:05:34 | INFO  | Flavor SCS-4V-8 created 2025-09-20 10:05:37.369808 | orchestrator | 2025-09-20 10:05:35 | INFO  | Flavor SCS-4V-8-20 created 2025-09-20 10:05:37.369812 | orchestrator | 2025-09-20 10:05:35 | INFO  | Flavor SCS-4V-16 created 2025-09-20 10:05:37.369816 | orchestrator | 2025-09-20 10:05:35 | INFO  | Flavor SCS-4V-16-50 created 2025-09-20 10:05:37.369820 | orchestrator | 2025-09-20 10:05:35 | INFO  | Flavor SCS-4V-32 created 2025-09-20 10:05:37.369823 | orchestrator | 2025-09-20 10:05:35 | INFO  | Flavor SCS-4V-32-100 created 
2025-09-20 10:05:37.369827 | orchestrator | 2025-09-20 10:05:35 | INFO  | Flavor SCS-8V-16 created 2025-09-20 10:05:37.369831 | orchestrator | 2025-09-20 10:05:35 | INFO  | Flavor SCS-8V-16-50 created 2025-09-20 10:05:37.369835 | orchestrator | 2025-09-20 10:05:36 | INFO  | Flavor SCS-8V-32 created 2025-09-20 10:05:37.369838 | orchestrator | 2025-09-20 10:05:36 | INFO  | Flavor SCS-8V-32-100 created 2025-09-20 10:05:37.369842 | orchestrator | 2025-09-20 10:05:36 | INFO  | Flavor SCS-16V-32 created 2025-09-20 10:05:37.369846 | orchestrator | 2025-09-20 10:05:36 | INFO  | Flavor SCS-16V-32-100 created 2025-09-20 10:05:37.369850 | orchestrator | 2025-09-20 10:05:36 | INFO  | Flavor SCS-2V-4-20s created 2025-09-20 10:05:37.369853 | orchestrator | 2025-09-20 10:05:36 | INFO  | Flavor SCS-4V-8-50s created 2025-09-20 10:05:37.369857 | orchestrator | 2025-09-20 10:05:37 | INFO  | Flavor SCS-8V-32-100s created 2025-09-20 10:05:39.602011 | orchestrator | 2025-09-20 10:05:39 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-20 10:05:49.814897 | orchestrator | 2025-09-20 10:05:49 | INFO  | Task b2df490a-0891-479c-a0d0-ae7b31f41d6e (bootstrap-basic) was prepared for execution. 2025-09-20 10:05:49.814995 | orchestrator | 2025-09-20 10:05:49 | INFO  | It takes a moment until task b2df490a-0891-479c-a0d0-ae7b31f41d6e (bootstrap-basic) has been started and output is visible here. 
2025-09-20 10:06:52.444115 | orchestrator | 2025-09-20 10:06:52.444229 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-20 10:06:52.444243 | orchestrator | 2025-09-20 10:06:52.444252 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-20 10:06:52.444260 | orchestrator | Saturday 20 September 2025 10:05:53 +0000 (0:00:00.078) 0:00:00.078 **** 2025-09-20 10:06:52.444269 | orchestrator | ok: [localhost] 2025-09-20 10:06:52.444278 | orchestrator | 2025-09-20 10:06:52.444286 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-20 10:06:52.444293 | orchestrator | Saturday 20 September 2025 10:05:55 +0000 (0:00:01.917) 0:00:01.996 **** 2025-09-20 10:06:52.444302 | orchestrator | ok: [localhost] 2025-09-20 10:06:52.444309 | orchestrator | 2025-09-20 10:06:52.444317 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-20 10:06:52.444325 | orchestrator | Saturday 20 September 2025 10:06:06 +0000 (0:00:10.342) 0:00:12.339 **** 2025-09-20 10:06:52.444333 | orchestrator | changed: [localhost] 2025-09-20 10:06:52.444341 | orchestrator | 2025-09-20 10:06:52.444348 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-20 10:06:52.444356 | orchestrator | Saturday 20 September 2025 10:06:13 +0000 (0:00:07.423) 0:00:19.763 **** 2025-09-20 10:06:52.444364 | orchestrator | ok: [localhost] 2025-09-20 10:06:52.444372 | orchestrator | 2025-09-20 10:06:52.444380 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-20 10:06:52.444387 | orchestrator | Saturday 20 September 2025 10:06:21 +0000 (0:00:07.560) 0:00:27.323 **** 2025-09-20 10:06:52.444397 | orchestrator | changed: [localhost] 2025-09-20 10:06:52.444404 | orchestrator | 2025-09-20 10:06:52.444412 | orchestrator | 
TASK [Create public network] *************************************************** 2025-09-20 10:06:52.444420 | orchestrator | Saturday 20 September 2025 10:06:28 +0000 (0:00:07.139) 0:00:34.463 **** 2025-09-20 10:06:52.444428 | orchestrator | changed: [localhost] 2025-09-20 10:06:52.444435 | orchestrator | 2025-09-20 10:06:52.444443 | orchestrator | TASK [Set public network to default] ******************************************* 2025-09-20 10:06:52.444451 | orchestrator | Saturday 20 September 2025 10:06:33 +0000 (0:00:05.340) 0:00:39.804 **** 2025-09-20 10:06:52.444515 | orchestrator | changed: [localhost] 2025-09-20 10:06:52.444524 | orchestrator | 2025-09-20 10:06:52.444532 | orchestrator | TASK [Create public subnet] **************************************************** 2025-09-20 10:06:52.444550 | orchestrator | Saturday 20 September 2025 10:06:40 +0000 (0:00:06.483) 0:00:46.287 **** 2025-09-20 10:06:52.444557 | orchestrator | changed: [localhost] 2025-09-20 10:06:52.444565 | orchestrator | 2025-09-20 10:06:52.444573 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-09-20 10:06:52.444581 | orchestrator | Saturday 20 September 2025 10:06:44 +0000 (0:00:04.470) 0:00:50.757 **** 2025-09-20 10:06:52.444589 | orchestrator | changed: [localhost] 2025-09-20 10:06:52.444596 | orchestrator | 2025-09-20 10:06:52.444603 | orchestrator | TASK [Create manager role] ***************************************************** 2025-09-20 10:06:52.444611 | orchestrator | Saturday 20 September 2025 10:06:48 +0000 (0:00:03.923) 0:00:54.681 **** 2025-09-20 10:06:52.444619 | orchestrator | ok: [localhost] 2025-09-20 10:06:52.444627 | orchestrator | 2025-09-20 10:06:52.444635 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:06:52.444643 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-20 10:06:52.444653 | orchestrator 
| 2025-09-20 10:06:52.444661 | orchestrator | 2025-09-20 10:06:52.444670 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:06:52.444711 | orchestrator | Saturday 20 September 2025 10:06:52 +0000 (0:00:03.686) 0:00:58.368 **** 2025-09-20 10:06:52.444719 | orchestrator | =============================================================================== 2025-09-20 10:06:52.444726 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.34s 2025-09-20 10:06:52.444734 | orchestrator | Get volume type local --------------------------------------------------- 7.56s 2025-09-20 10:06:52.444742 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.42s 2025-09-20 10:06:52.444752 | orchestrator | Create volume type local ------------------------------------------------ 7.14s 2025-09-20 10:06:52.444761 | orchestrator | Set public network to default ------------------------------------------- 6.48s 2025-09-20 10:06:52.444771 | orchestrator | Create public network --------------------------------------------------- 5.34s 2025-09-20 10:06:52.444780 | orchestrator | Create public subnet ---------------------------------------------------- 4.47s 2025-09-20 10:06:52.444788 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.92s 2025-09-20 10:06:52.444796 | orchestrator | Create manager role ----------------------------------------------------- 3.69s 2025-09-20 10:06:52.444803 | orchestrator | Gathering Facts --------------------------------------------------------- 1.92s 2025-09-20 10:06:54.866848 | orchestrator | 2025-09-20 10:06:54 | INFO  | It takes a moment until task a741c8e0-7040-4b6d-9e8b-f1d148617e86 (image-manager) has been started and output is visible here. 
2025-09-20 10:07:35.594982 | orchestrator | 2025-09-20 10:06:57 | INFO  | Processing image 'Cirros 0.6.2' 2025-09-20 10:07:35.595089 | orchestrator | 2025-09-20 10:06:58 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-09-20 10:07:35.595100 | orchestrator | 2025-09-20 10:06:58 | INFO  | Importing image Cirros 0.6.2 2025-09-20 10:07:35.595107 | orchestrator | 2025-09-20 10:06:58 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-20 10:07:35.595115 | orchestrator | 2025-09-20 10:06:59 | INFO  | Waiting for image to leave queued state... 2025-09-20 10:07:35.595122 | orchestrator | 2025-09-20 10:07:01 | INFO  | Waiting for import to complete... 2025-09-20 10:07:35.595129 | orchestrator | 2025-09-20 10:07:12 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-09-20 10:07:35.595135 | orchestrator | 2025-09-20 10:07:12 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-09-20 10:07:35.595141 | orchestrator | 2025-09-20 10:07:12 | INFO  | Setting internal_version = 0.6.2 2025-09-20 10:07:35.595147 | orchestrator | 2025-09-20 10:07:12 | INFO  | Setting image_original_user = cirros 2025-09-20 10:07:35.595154 | orchestrator | 2025-09-20 10:07:12 | INFO  | Adding tag os:cirros 2025-09-20 10:07:35.595160 | orchestrator | 2025-09-20 10:07:12 | INFO  | Setting property architecture: x86_64 2025-09-20 10:07:35.595166 | orchestrator | 2025-09-20 10:07:12 | INFO  | Setting property hw_disk_bus: scsi 2025-09-20 10:07:35.595172 | orchestrator | 2025-09-20 10:07:13 | INFO  | Setting property hw_rng_model: virtio 2025-09-20 10:07:35.595178 | orchestrator | 2025-09-20 10:07:13 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-20 10:07:35.595184 | orchestrator | 2025-09-20 10:07:13 | INFO  | Setting property hw_watchdog_action: reset 2025-09-20 10:07:35.595190 | orchestrator | 2025-09-20 10:07:13 | 
INFO  | Setting property hypervisor_type: qemu 2025-09-20 10:07:35.595196 | orchestrator | 2025-09-20 10:07:14 | INFO  | Setting property os_distro: cirros 2025-09-20 10:07:35.595202 | orchestrator | 2025-09-20 10:07:14 | INFO  | Setting property os_purpose: minimal 2025-09-20 10:07:35.595208 | orchestrator | 2025-09-20 10:07:14 | INFO  | Setting property replace_frequency: never 2025-09-20 10:07:35.595235 | orchestrator | 2025-09-20 10:07:14 | INFO  | Setting property uuid_validity: none 2025-09-20 10:07:35.595241 | orchestrator | 2025-09-20 10:07:14 | INFO  | Setting property provided_until: none 2025-09-20 10:07:35.595253 | orchestrator | 2025-09-20 10:07:15 | INFO  | Setting property image_description: Cirros 2025-09-20 10:07:35.595262 | orchestrator | 2025-09-20 10:07:15 | INFO  | Setting property image_name: Cirros 2025-09-20 10:07:35.595268 | orchestrator | 2025-09-20 10:07:15 | INFO  | Setting property internal_version: 0.6.2 2025-09-20 10:07:35.595273 | orchestrator | 2025-09-20 10:07:15 | INFO  | Setting property image_original_user: cirros 2025-09-20 10:07:35.595279 | orchestrator | 2025-09-20 10:07:16 | INFO  | Setting property os_version: 0.6.2 2025-09-20 10:07:35.595285 | orchestrator | 2025-09-20 10:07:16 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-20 10:07:35.595292 | orchestrator | 2025-09-20 10:07:16 | INFO  | Setting property image_build_date: 2023-05-30 2025-09-20 10:07:35.595298 | orchestrator | 2025-09-20 10:07:16 | INFO  | Checking status of 'Cirros 0.6.2' 2025-09-20 10:07:35.595303 | orchestrator | 2025-09-20 10:07:16 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-09-20 10:07:35.595308 | orchestrator | 2025-09-20 10:07:16 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-09-20 10:07:35.595314 | orchestrator | 2025-09-20 10:07:16 | INFO  | Processing image 'Cirros 0.6.3' 2025-09-20 10:07:35.595320 | orchestrator | 2025-09-20 
10:07:17 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-09-20 10:07:35.595325 | orchestrator | 2025-09-20 10:07:17 | INFO  | Importing image Cirros 0.6.3 2025-09-20 10:07:35.595331 | orchestrator | 2025-09-20 10:07:17 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-20 10:07:35.595336 | orchestrator | 2025-09-20 10:07:17 | INFO  | Waiting for image to leave queued state... 2025-09-20 10:07:35.595342 | orchestrator | 2025-09-20 10:07:19 | INFO  | Waiting for import to complete... 2025-09-20 10:07:35.595361 | orchestrator | 2025-09-20 10:07:29 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-09-20 10:07:35.595367 | orchestrator | 2025-09-20 10:07:30 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-09-20 10:07:35.595373 | orchestrator | 2025-09-20 10:07:30 | INFO  | Setting internal_version = 0.6.3 2025-09-20 10:07:35.595379 | orchestrator | 2025-09-20 10:07:30 | INFO  | Setting image_original_user = cirros 2025-09-20 10:07:35.595384 | orchestrator | 2025-09-20 10:07:30 | INFO  | Adding tag os:cirros 2025-09-20 10:07:35.595390 | orchestrator | 2025-09-20 10:07:30 | INFO  | Setting property architecture: x86_64 2025-09-20 10:07:35.595395 | orchestrator | 2025-09-20 10:07:30 | INFO  | Setting property hw_disk_bus: scsi 2025-09-20 10:07:35.595401 | orchestrator | 2025-09-20 10:07:31 | INFO  | Setting property hw_rng_model: virtio 2025-09-20 10:07:35.595406 | orchestrator | 2025-09-20 10:07:31 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-20 10:07:35.595412 | orchestrator | 2025-09-20 10:07:31 | INFO  | Setting property hw_watchdog_action: reset 2025-09-20 10:07:35.595417 | orchestrator | 2025-09-20 10:07:31 | INFO  | Setting property hypervisor_type: qemu 2025-09-20 10:07:35.595422 | orchestrator | 2025-09-20 10:07:32 | INFO  | Setting property os_distro: cirros 
2025-09-20 10:07:35.595433 | orchestrator | 2025-09-20 10:07:32 | INFO  | Setting property os_purpose: minimal 2025-09-20 10:07:35.595438 | orchestrator | 2025-09-20 10:07:32 | INFO  | Setting property replace_frequency: never 2025-09-20 10:07:35.595444 | orchestrator | 2025-09-20 10:07:32 | INFO  | Setting property uuid_validity: none 2025-09-20 10:07:35.595449 | orchestrator | 2025-09-20 10:07:32 | INFO  | Setting property provided_until: none 2025-09-20 10:07:35.595454 | orchestrator | 2025-09-20 10:07:33 | INFO  | Setting property image_description: Cirros 2025-09-20 10:07:35.595460 | orchestrator | 2025-09-20 10:07:33 | INFO  | Setting property image_name: Cirros 2025-09-20 10:07:35.595465 | orchestrator | 2025-09-20 10:07:33 | INFO  | Setting property internal_version: 0.6.3 2025-09-20 10:07:35.595471 | orchestrator | 2025-09-20 10:07:33 | INFO  | Setting property image_original_user: cirros 2025-09-20 10:07:35.595476 | orchestrator | 2025-09-20 10:07:34 | INFO  | Setting property os_version: 0.6.3 2025-09-20 10:07:35.595482 | orchestrator | 2025-09-20 10:07:34 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-20 10:07:35.595487 | orchestrator | 2025-09-20 10:07:34 | INFO  | Setting property image_build_date: 2024-09-26 2025-09-20 10:07:35.595495 | orchestrator | 2025-09-20 10:07:34 | INFO  | Checking status of 'Cirros 0.6.3' 2025-09-20 10:07:35.595501 | orchestrator | 2025-09-20 10:07:34 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-09-20 10:07:35.595506 | orchestrator | 2025-09-20 10:07:34 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-09-20 10:07:35.923669 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-09-20 10:07:38.279689 | orchestrator | 2025-09-20 10:07:38 | INFO  | date: 2025-09-20 2025-09-20 10:07:38.279773 | orchestrator | 2025-09-20 10:07:38 | INFO  | image: 
octavia-amphora-haproxy-2024.2.20250920.qcow2 2025-09-20 10:07:38.279781 | orchestrator | 2025-09-20 10:07:38 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2 2025-09-20 10:07:38.280333 | orchestrator | 2025-09-20 10:07:38 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2.CHECKSUM 2025-09-20 10:07:38.344735 | orchestrator | 2025-09-20 10:07:38 | INFO  | checksum: 4a53cb1bf1a23bd8e5815bb881431d1645861d19364fb3db4890d7035d505565 2025-09-20 10:07:38.448958 | orchestrator | 2025-09-20 10:07:38 | INFO  | It takes a moment until task d60208b8-52fa-4ccb-a3ae-034203b4736e (image-manager) has been started and output is visible here. 2025-09-20 10:08:39.944435 | orchestrator | 2025-09-20 10:07:40 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-20' 2025-09-20 10:08:39.944673 | orchestrator | 2025-09-20 10:07:40 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2: 200 2025-09-20 10:08:39.944706 | orchestrator | 2025-09-20 10:07:40 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-20 2025-09-20 10:08:39.944719 | orchestrator | 2025-09-20 10:07:40 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2 2025-09-20 10:08:39.944732 | orchestrator | 2025-09-20 10:07:41 | INFO  | Waiting for image to leave queued state... 2025-09-20 10:08:39.944744 | orchestrator | 2025-09-20 10:07:44 | INFO  | Waiting for import to complete... 
2025-09-20 10:08:39.944786 | orchestrator | 2025-09-20 10:07:54 | INFO  | Waiting for import to complete... 2025-09-20 10:08:39.944797 | orchestrator | 2025-09-20 10:08:04 | INFO  | Waiting for import to complete... 2025-09-20 10:08:39.944807 | orchestrator | 2025-09-20 10:08:14 | INFO  | Waiting for import to complete... 2025-09-20 10:08:39.944818 | orchestrator | 2025-09-20 10:08:24 | INFO  | Waiting for import to complete... 2025-09-20 10:08:39.944829 | orchestrator | 2025-09-20 10:08:34 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-20' successfully completed, reloading images 2025-09-20 10:08:39.944841 | orchestrator | 2025-09-20 10:08:35 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-20' 2025-09-20 10:08:39.944853 | orchestrator | 2025-09-20 10:08:35 | INFO  | Setting internal_version = 2025-09-20 2025-09-20 10:08:39.944863 | orchestrator | 2025-09-20 10:08:35 | INFO  | Setting image_original_user = ubuntu 2025-09-20 10:08:39.944875 | orchestrator | 2025-09-20 10:08:35 | INFO  | Adding tag amphora 2025-09-20 10:08:39.944886 | orchestrator | 2025-09-20 10:08:35 | INFO  | Adding tag os:ubuntu 2025-09-20 10:08:39.944896 | orchestrator | 2025-09-20 10:08:35 | INFO  | Setting property architecture: x86_64 2025-09-20 10:08:39.944907 | orchestrator | 2025-09-20 10:08:35 | INFO  | Setting property hw_disk_bus: scsi 2025-09-20 10:08:39.944917 | orchestrator | 2025-09-20 10:08:35 | INFO  | Setting property hw_rng_model: virtio 2025-09-20 10:08:39.944928 | orchestrator | 2025-09-20 10:08:36 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-20 10:08:39.944953 | orchestrator | 2025-09-20 10:08:36 | INFO  | Setting property hw_watchdog_action: reset 2025-09-20 10:08:39.944964 | orchestrator | 2025-09-20 10:08:36 | INFO  | Setting property hypervisor_type: qemu 2025-09-20 10:08:39.944993 | orchestrator | 2025-09-20 10:08:36 | INFO  | Setting property os_distro: ubuntu 2025-09-20 10:08:39.945005 | orchestrator | 2025-09-20 10:08:37 | 
INFO  | Setting property replace_frequency: quarterly
2025-09-20 10:08:39.945018 | orchestrator | 2025-09-20 10:08:37 | INFO  | Setting property uuid_validity: last-1
2025-09-20 10:08:39.945029 | orchestrator | 2025-09-20 10:08:37 | INFO  | Setting property provided_until: none
2025-09-20 10:08:39.945041 | orchestrator | 2025-09-20 10:08:37 | INFO  | Setting property os_purpose: network
2025-09-20 10:08:39.945053 | orchestrator | 2025-09-20 10:08:37 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-09-20 10:08:39.945066 | orchestrator | 2025-09-20 10:08:38 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-09-20 10:08:39.945079 | orchestrator | 2025-09-20 10:08:38 | INFO  | Setting property internal_version: 2025-09-20
2025-09-20 10:08:39.945089 | orchestrator | 2025-09-20 10:08:38 | INFO  | Setting property image_original_user: ubuntu
2025-09-20 10:08:39.945100 | orchestrator | 2025-09-20 10:08:38 | INFO  | Setting property os_version: 2025-09-20
2025-09-20 10:08:39.945112 | orchestrator | 2025-09-20 10:08:39 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250920.qcow2
2025-09-20 10:08:39.945123 | orchestrator | 2025-09-20 10:08:39 | INFO  | Setting property image_build_date: 2025-09-20
2025-09-20 10:08:39.945133 | orchestrator | 2025-09-20 10:08:39 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-20'
2025-09-20 10:08:39.945144 | orchestrator | 2025-09-20 10:08:39 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-20'
2025-09-20 10:08:39.945182 | orchestrator | 2025-09-20 10:08:39 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-09-20 10:08:39.945194 | orchestrator | 2025-09-20 10:08:39 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-09-20 10:08:39.945206 | orchestrator | 2025-09-20 10:08:39 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-09-20 10:08:39.945217 | orchestrator | 2025-09-20 10:08:39 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-09-20 10:08:40.550445 | orchestrator | ok: Runtime: 0:03:16.619139
2025-09-20 10:08:40.571813 |
2025-09-20 10:08:40.571953 | TASK [Run checks]
2025-09-20 10:08:41.290522 | orchestrator | + set -e
2025-09-20 10:08:41.290780 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-20 10:08:41.290811 | orchestrator | ++ export INTERACTIVE=false
2025-09-20 10:08:41.290834 | orchestrator | ++ INTERACTIVE=false
2025-09-20 10:08:41.290848 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-20 10:08:41.290861 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-20 10:08:41.290876 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-20 10:08:41.292864 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-20 10:08:41.298431 | orchestrator |
2025-09-20 10:08:41.298528 | orchestrator | # CHECK
2025-09-20 10:08:41.298544 | orchestrator |
2025-09-20 10:08:41.298558 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-20 10:08:41.298576 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-20 10:08:41.298587 | orchestrator | + echo
2025-09-20 10:08:41.298598 | orchestrator | + echo '# CHECK'
2025-09-20 10:08:41.298644 | orchestrator | + echo
2025-09-20 10:08:41.298659 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-20 10:08:41.299185 | orchestrator | ++ semver latest 5.0.0
2025-09-20 10:08:41.366673 | orchestrator |
2025-09-20 10:08:41.366776 | orchestrator | ## Containers @ testbed-manager
2025-09-20 10:08:41.366793 | orchestrator |
2025-09-20 10:08:41.366817 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-20 10:08:41.366829 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-20 10:08:41.366840 | orchestrator | + echo
2025-09-20 10:08:41.366853 | orchestrator | + echo '## Containers @ testbed-manager'
2025-09-20 10:08:41.366865 | orchestrator | + echo
2025-09-20 10:08:41.366876 | orchestrator | + osism container testbed-manager ps
2025-09-20 10:08:43.750275 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-20 10:08:43.750428 | orchestrator | b877b5a92cbd registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter
2025-09-20 10:08:43.750450 | orchestrator | 0c3abd7648cb registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2025-09-20 10:08:43.750461 | orchestrator | 829778a84705 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-20 10:08:43.750478 | orchestrator | bda8bd8317cf registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-09-20 10:08:43.750489 | orchestrator | 25a80e123608 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server
2025-09-20 10:08:43.750504 | orchestrator | 197ef05fd840 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 17 minutes cephclient
2025-09-20 10:08:43.750515 | orchestrator | 2d01badcd894 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-09-20 10:08:43.750526 | orchestrator | 49d0fba83d60 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-20 10:08:43.750536 | orchestrator | e9f4a70249b3 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-09-20 10:08:43.750576 | orchestrator | d7e46370be57 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin
2025-09-20 10:08:43.750587 | orchestrator | f08121930f80 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 31 minutes openstackclient
2025-09-20 10:08:43.750598 | orchestrator | c373359fe8f2 registry.osism.tech/osism/homer:v25.08.1 "/bin/sh /entrypoint…" 31 minutes ago Up 31 minutes (healthy) 8080/tcp homer
2025-09-20 10:08:43.750608 | orchestrator | e21eb1fb071a registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 39 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1
2025-09-20 10:08:43.750647 | orchestrator | 4009b2b69de9 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 39 minutes ago Up 38 minutes (healthy) ceph-ansible
2025-09-20 10:08:43.750657 | orchestrator | 5fa54a31763f registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 39 minutes ago Up 38 minutes (healthy) osism-kubernetes
2025-09-20 10:08:43.750687 | orchestrator | b1b15c83b3f6 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 39 minutes ago Up 38 minutes (healthy) kolla-ansible
2025-09-20 10:08:43.750704 | orchestrator | dee7c5d2311a registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 55 minutes ago Up 54 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-09-20 10:08:43.750715 | orchestrator | 203d4de70859 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 59 minutes ago Up 38 minutes (healthy) osism-ansible
2025-09-20 10:08:43.750726 | orchestrator | 93565d8415fa registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 59 minutes ago Up 38 minutes (healthy) 8000/tcp manager-ara-server-1
2025-09-20 10:08:43.750736 | orchestrator | 38ece62b4b23 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-09-20 10:08:43.750746 | orchestrator | 4b6220552f5b registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 59 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1
2025-09-20 10:08:43.750757 | orchestrator | db7d361b00e7 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" 59 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1
2025-09-20 10:08:43.750767 | orchestrator | 42029c5d290d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-listener-1
2025-09-20 10:08:43.750785 | orchestrator | 099c57506099 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 59 minutes ago Up 39 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-09-20 10:08:43.750795 | orchestrator | 9e2c83536e14 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-openstack-1
2025-09-20 10:08:43.750805 | orchestrator | 129bc0f34437 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-beat-1
2025-09-20 10:08:43.750815 | orchestrator | b87beefa16ff registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 59 minutes ago Up 39 minutes (healthy) osismclient
2025-09-20 10:08:43.750824 | orchestrator | 4ba75a3e27b0 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 59 minutes ago Up 39 minutes (healthy) manager-flower-1
2025-09-20 10:08:43.750835 | orchestrator | 493291291bb4 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-09-20 10:08:43.966711 | orchestrator |
2025-09-20 10:08:43.966808 | orchestrator | ## Images @ testbed-manager
2025-09-20 10:08:43.966822 | orchestrator |
2025-09-20 10:08:43.966831 | orchestrator | + echo
2025-09-20 10:08:43.966840 | orchestrator | + echo '## Images @ testbed-manager'
2025-09-20 10:08:43.966849 | orchestrator | + echo
2025-09-20 10:08:43.966857 | orchestrator | + osism container testbed-manager images
2025-09-20 10:08:46.120057 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-20 10:08:46.357762 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 339294c00cfd 43 minutes ago 590MB
2025-09-20 10:08:46.357885 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 6dc4ea4637b3 43 minutes ago 543MB
2025-09-20 10:08:46.357926 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 90e80b1f4869 45 minutes ago 1.22GB
2025-09-20 10:08:46.357939 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 2d3a134f42e3 50 minutes ago 315MB
2025-09-20 10:08:46.357951 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 349210c49c4d About an hour ago 243MB
2025-09-20 10:08:46.357963 | orchestrator | registry.osism.tech/osism/homer v25.08.1 270470b58639 7 hours ago 11.5MB
2025-09-20 10:08:46.357975 | orchestrator | registry.osism.tech/osism/cephclient reef 6eb6307c0ae7 7 hours ago 453MB
2025-09-20 10:08:46.357987 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b44872d5d32b 9 hours ago 631MB
2025-09-20 10:08:46.357998 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ce9d589a849 9 hours ago 748MB
2025-09-20 10:08:46.358066 | orchestrator | registry.osism.tech/kolla/cron 2024.2 fdb1ac7fd2c0 9 hours ago 320MB
2025-09-20 10:08:46.358080 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 2b0169669244 9 hours ago 459MB
2025-09-20 10:08:46.358092 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 67239c483b22 9 hours ago 360MB
2025-09-20 10:08:46.358104 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 1dcff67faba4 9 hours ago 363MB
2025-09-20 10:08:46.358115 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 3a6ccecdfa92 9 hours ago 894MB
2025-09-20 10:08:46.358149 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 dad7dbe29e33 9 hours ago 412MB
2025-09-20 10:08:46.358161 | orchestrator | registry.osism.tech/osism/osism-ansible latest 1f9d374e29e6 10 hours ago 594MB
2025-09-20 10:08:46.358172 | orchestrator | registry.osism.tech/osism/kolla-ansible a8bf06154a6a 10 hours ago 589MB
2025-09-20 10:08:46.358183 | orchestrator | registry.osism.tech/osism/ceph-ansible b1242be0232b 10 hours ago 543MB
2025-09-20 10:08:46.358194 | orchestrator | registry.osism.tech/osism/osism-kubernetes d9cadae386d2 10 hours ago 1.22GB
2025-09-20 10:08:46.358205 | orchestrator | registry.osism.tech/osism/osism latest f7431a16d155 10 hours ago 325MB
2025-09-20 10:08:46.358217 | orchestrator | registry.osism.tech/osism/osism-frontend latest a19e06f175f5 10 hours ago 236MB
2025-09-20 10:08:46.358228 | orchestrator | registry.osism.tech/osism/inventory-reconciler 3dfd8acf828c 10 hours ago 315MB
2025-09-20 10:08:46.358239 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 3 weeks ago 275MB
2025-09-20 10:08:46.358251 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 48f7ae354376 6 weeks ago 329MB
2025-09-20 10:08:46.358286 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 weeks ago 226MB
2025-09-20 10:08:46.358299 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB
2025-09-20 10:08:46.358311 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB
2025-09-20 10:08:46.358323 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB
2025-09-20 10:08:46.358357 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-20 10:08:46.358370 | orchestrator | ++ semver latest 5.0.0
2025-09-20 10:08:46.386352 | orchestrator |
2025-09-20 10:08:46.386478 | orchestrator | ## Containers @ testbed-node-0
2025-09-20 10:08:46.386504 | orchestrator |
2025-09-20 10:08:46.386523 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-20 10:08:46.386541 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-20 10:08:46.386561 | orchestrator | + echo
2025-09-20 10:08:46.386582 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-09-20 10:08:46.386601 | orchestrator | + echo
2025-09-20 10:08:46.386889 | orchestrator | + osism container testbed-node-0 ps
2025-09-20 10:08:48.516971 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-20 10:08:48.517054 | orchestrator | fd5eb94e04f3 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-09-20 10:08:48.517063 | orchestrator | 0bb7b6b10a6b registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-09-20 10:08:48.517086 | orchestrator | 439ed4ef4068 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-09-20 10:08:48.517091 | orchestrator | 866b78294d55 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-09-20 10:08:48.517096 | orchestrator | 95c52b88a16b registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-09-20 10:08:48.517101 | orchestrator | 0a2376e126ea registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-09-20 10:08:48.517123 | orchestrator | f1ec68f26541 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api
2025-09-20 10:08:48.517128 | orchestrator | 91662005ec91 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-09-20 10:08:48.517132 | orchestrator | 171a1c4c8bb5 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-09-20 10:08:48.517137 | orchestrator | 0882b3769c5a registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-09-20 10:08:48.517141 | orchestrator | fc36c4b971ea registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-09-20 10:08:48.517146 | orchestrator | a988688f96f8 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2025-09-20 10:08:48.517150 | orchestrator | 5012468c7d5b registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2025-09-20 10:08:48.517155 | orchestrator | 2a1920a49663 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) neutron_server
2025-09-20 10:08:48.517159 | orchestrator | 96913141cc82 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2025-09-20 10:08:48.517164 | orchestrator | 266cef821c9a registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2025-09-20 10:08:48.517168 | orchestrator | d06c26188aec registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2025-09-20 10:08:48.517172 | orchestrator | 421952445de8 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2025-09-20 10:08:48.517177 | orchestrator | a40d28bef7a8 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker
2025-09-20 10:08:48.517182 | orchestrator | a51b155ad70a registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener
2025-09-20 10:08:48.517186 | orchestrator | ca6673f20702 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-09-20 10:08:48.517205 | orchestrator | 5b6c2a267f10 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-09-20 10:08:48.517213 | orchestrator | f6495905598c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-20 10:08:48.517218 | orchestrator | e50ecb04fffe registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-09-20 10:08:48.517222 | orchestrator | 32f5ad829d86 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-09-20 10:08:48.517227 | orchestrator | 1c797605e6fe registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-20 10:08:48.517235 | orchestrator | 4f84e2b4be8c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler
2025-09-20 10:08:48.517240 | orchestrator | 148c404a5dac registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-09-20 10:08:48.517244 | orchestrator | 31d109ac9b6b registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-09-20 10:08:48.517248 | orchestrator | aa51fd3896b5 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-09-20 10:08:48.517253 | orchestrator | e30fa4e887fb registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api
2025-09-20 10:08:48.517257 | orchestrator | a88669b30c3f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2025-09-20 10:08:48.517261 | orchestrator | e278f6c1cd13 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-09-20 10:08:48.517266 | orchestrator | 5e01d44c92f7 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-09-20 10:08:48.517270 | orchestrator | 2ff740d373d0 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-09-20 10:08:48.517274 | orchestrator | 0b0fb8cd3627 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-09-20 10:08:48.517279 | orchestrator | b78bb00285ca registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-09-20 10:08:48.517283 | orchestrator | 7f22ed97b7a0 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-09-20 10:08:48.517287 | orchestrator | f2529f3cf580 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-09-20 10:08:48.517292 | orchestrator | d2828a3eb35f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0
2025-09-20 10:08:48.517296 | orchestrator | d877afc9d562 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-09-20 10:08:48.517300 | orchestrator | 594d189b19fd registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-09-20 10:08:48.517305 | orchestrator | 4f7cb12eaabf registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-09-20 10:08:48.517309 | orchestrator | 02ab885a41e7 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2025-09-20 10:08:48.517321 | orchestrator | 49b4aeabf6a7 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-09-20 10:08:48.517332 | orchestrator | f4dbea907917 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2025-09-20 10:08:48.517336 | orchestrator | 72d11574b2a0 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0
2025-09-20 10:08:48.517341 | orchestrator | 5fb825bf4fde registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-09-20 10:08:48.517345 | orchestrator | 4e697f643942 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-09-20 10:08:48.517349 | orchestrator | 09fc665faa8b registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-09-20 10:08:48.517353 | orchestrator | d9aac41bad40 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-09-20 10:08:48.517358 | orchestrator | 9883810b55c2 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-09-20 10:08:48.517362 | orchestrator | 9dde65243b29 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-09-20 10:08:48.517366 | orchestrator | d6d36c8b2fe3 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-09-20 10:08:48.517371 | orchestrator | 7217069691e0 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-09-20 10:08:48.517375 | orchestrator | 46034748d433 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-20 10:08:48.517379 | orchestrator | 6001ea481581 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-09-20 10:08:48.736749 | orchestrator |
2025-09-20 10:08:48.736835 | orchestrator | ## Images @ testbed-node-0
2025-09-20 10:08:48.736843 | orchestrator |
2025-09-20 10:08:48.736849 | orchestrator | + echo
2025-09-20 10:08:48.736855 | orchestrator | + echo '## Images @ testbed-node-0'
2025-09-20 10:08:48.736860 | orchestrator | + echo
2025-09-20 10:08:48.736865 | orchestrator | + osism container testbed-node-0 images
2025-09-20 10:08:50.897407 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-20 10:08:50.897540 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a56e1a509897 7 hours ago 1.27GB
2025-09-20 10:08:50.897570 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 cbd748637569 9 hours ago 331MB
2025-09-20 10:08:50.897592 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d4294af2e892 9 hours ago 328MB
2025-09-20 10:08:50.897669 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b44872d5d32b 9 hours ago 631MB
2025-09-20 10:08:50.897695 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ce9d589a849 9 hours ago 748MB
2025-09-20 10:08:50.897711 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c2d4086fa5d2 9 hours ago 321MB
2025-09-20 10:08:50.897729 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 062917222ddf 9 hours ago 1.59GB
2025-09-20 10:08:50.897746 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 807046badeab 9 hours ago 1.56GB
2025-09-20 10:08:50.897763 | orchestrator | registry.osism.tech/kolla/cron 2024.2 fdb1ac7fd2c0 9 hours ago 320MB
2025-09-20 10:08:50.897812 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 c5be57c51f09 9 hours ago 1.05GB
2025-09-20 10:08:50.897829 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 279ed9090156 9 hours ago 420MB
2025-09-20 10:08:50.897845 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 74bf1faee51e 9 hours ago 377MB
2025-09-20 10:08:50.897862 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4c3412419f36 9 hours ago 327MB
2025-09-20 10:08:50.897879 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d07ecf7097cd 9 hours ago 327MB
2025-09-20 10:08:50.897895 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ef300c8d9258 9 hours ago 1.21GB
2025-09-20 10:08:50.897912 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 a288348ddc28 9 hours ago 593MB
2025-09-20 10:08:50.897928 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 67239c483b22 9 hours ago 360MB
2025-09-20 10:08:50.897944 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 8fdc5556ff2a 9 hours ago 356MB
2025-09-20 10:08:50.897960 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4a86fe1b5d5d 9 hours ago 353MB
2025-09-20 10:08:50.897977 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 28f5d6626265 9 hours ago 347MB
2025-09-20 10:08:50.897992 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 dad7dbe29e33 9 hours ago 412MB
2025-09-20 10:08:50.898007 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 d534700a4020 9 hours ago 364MB
2025-09-20 10:08:50.898076 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 2b2c27a921cf 9 hours ago 364MB
2025-09-20 10:08:50.898095 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 bebccfdca14f 9 hours ago 1.2GB
2025-09-20 10:08:50.898112 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 151d45e415a2 9 hours ago 1.31GB
2025-09-20 10:08:50.898128 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 90b8a0e973d1 9 hours ago 1.16GB
2025-09-20 10:08:50.898144 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 55718ea9eeb0 9 hours ago 1.11GB
2025-09-20 10:08:50.898161 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 4e25fa6e32b9 9 hours ago 1.11GB
2025-09-20 10:08:50.898177 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 2c881665cc69 9 hours ago 1.04GB
2025-09-20 10:08:50.898195 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 5aea062cd9c4 9 hours ago 1.04GB
2025-09-20 10:08:50.898213 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 59d53505cfd8 9 hours ago 1.04GB
2025-09-20 10:08:50.898230 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 a7617ff32f8f 9 hours ago 1.04GB
2025-09-20 10:08:50.898248 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 7dae8c7e17e4 9 hours ago 1.04GB
2025-09-20 10:08:50.898265 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 ee60c82cc9e1 9 hours ago 1.04GB
2025-09-20 10:08:50.898282 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 b2a85ccbb20a 9 hours ago 1.04GB
2025-09-20 10:08:50.898298 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4d300102afee 9 hours ago 1.41GB
2025-09-20 10:08:50.898314 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 baa1bc8f1e13 9 hours ago 1.41GB
2025-09-20 10:08:50.898357 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 b4f55138c4ad 9 hours ago 1.1GB
2025-09-20 10:08:50.898382 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 2e361f9a2585 9 hours ago 1.12GB
2025-09-20 10:08:50.898423 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 9492daf1f100 9 hours ago 1.12GB
2025-09-20 10:08:50.898445 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c0b88995f1b5 9 hours ago 1.1GB
2025-09-20 10:08:50.898462 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 0297c70d41f5 9 hours ago 1.1GB
2025-09-20 10:08:50.898478 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 6566218469d4 9 hours ago 1.06GB
2025-09-20 10:08:50.898494 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6297bc16ad52 9 hours ago 1.06GB
2025-09-20 10:08:50.898519 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 daa88d238be7 9 hours ago 1.06GB
2025-09-20 10:08:50.898543 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 478801741ccd 9 hours ago 1.3GB
2025-09-20 10:08:50.898564 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 304be05688f9 9 hours ago 1.3GB
2025-09-20 10:08:50.898580 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 6334b378f681 9 hours ago 1.42GB
2025-09-20 10:08:50.898595 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 3e549beb306f 9 hours ago 1.3GB
2025-09-20 10:08:50.898612 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 0cd53433cbeb 9 hours ago 1.05GB
2025-09-20 10:08:50.898710 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b4cb5c883aaa 9 hours ago 1.05GB
2025-09-20 10:08:50.898736 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 782b8fe503f7 9 hours ago 1.05GB
2025-09-20 10:08:50.898753 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 91b868db9be1 9 hours ago 1.06GB
2025-09-20 10:08:50.898768 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 c57fe4414b86 9 hours ago 1.05GB
2025-09-20 10:08:50.898783 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 88cef4128b0e 9 hours ago 1.06GB
2025-09-20 10:08:50.898797 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 286a7939de9e 9 hours ago 1.15GB
2025-09-20 10:08:50.898812 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3ed757503e46 9 hours ago 1.25GB
2025-09-20 10:08:50.898826 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ce1472f5dd7d 9 hours ago 1.12GB
2025-09-20 10:08:50.898841 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 d202b0000858 9 hours ago 1.11GB
2025-09-20 10:08:50.898855 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 63d70c74c0aa 9 hours ago 949MB
2025-09-20 10:08:50.898871 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 548b04fa8ce4 9 hours ago 949MB
2025-09-20 10:08:50.898899 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b9e7968e9914 9 hours ago 949MB
2025-09-20 10:08:50.898916 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 bd01fffe8a34 9 hours ago 949MB
2025-09-20 10:08:51.247883 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-20 10:08:51.248897 | orchestrator | ++ semver latest 5.0.0
2025-09-20 10:08:51.296353 | orchestrator |
2025-09-20 10:08:51.296429 | orchestrator | ## Containers @ testbed-node-1
2025-09-20 10:08:51.296438 | orchestrator |
2025-09-20 10:08:51.296444 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-20 10:08:51.296450 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-20 10:08:51.296457 | orchestrator | + echo
2025-09-20 10:08:51.296468 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-09-20 10:08:51.296478 | orchestrator | + echo
2025-09-20 10:08:51.296488 | orchestrator | + osism container testbed-node-1 ps
2025-09-20 10:08:53.698683 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-20 10:08:53.698835 | orchestrator | cc1fc69dac3d registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-09-20 10:08:53.698854 | orchestrator | cef452a148eb registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-09-20 10:08:53.698866 | orchestrator | e77b49abc55a registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager
2025-09-20 10:08:53.698877 | orchestrator | c9777e530535 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-09-20 10:08:53.698888 | orchestrator | 62ed15e0b7d3 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-09-20 10:08:53.698900 | orchestrator | e390d787a238 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-20 10:08:53.698911 | orchestrator | e61c56744f6b registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-09-20 10:08:53.698922 | orchestrator | 1c4ae656315c registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) magnum_api
2025-09-20 10:08:53.698933 | orchestrator | 4ef8fe3103df registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-09-20 10:08:53.698950 | orchestrator | eae7b15de6d2 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-09-20 10:08:53.698961 | orchestrator | d276554aa69f registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-09-20 10:08:53.698972 | orchestrator | e2c7e2bfd875 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2025-09-20 10:08:53.698983 | orchestrator | 246868de08b9 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-09-20 10:08:53.698994 | orchestrator | f6c660173ee7 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2025-09-20 10:08:53.699005 | orchestrator | 89d84d2aba91 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer
2025-09-20 10:08:53.699016 | orchestrator | 594453932cef registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central
2025-09-20 10:08:53.699028 | orchestrator | f131406cac05 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api
2025-09-20 10:08:53.699056 | orchestrator | 882f580682f8 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9
2025-09-20 10:08:53.699067 | orchestrator | 8dbf71dcb019 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker
2025-09-20 10:08:53.699079 | orchestrator | 0effb347c1f4 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener
2025-09-20 10:08:53.699098 | orchestrator | aa51dda50b07 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-09-20 10:08:53.699129 | orchestrator | 276775f87a3b registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-09-20 10:08:53.699141 | orchestrator | c3ed4f787238 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-20 10:08:53.699152 | orchestrator | c5798ff76e7d registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-09-20 10:08:53.699164 | orchestrator | a50c50baca84 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-09-20 10:08:53.699175 | orchestrator | e5a45753b19b registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-20 10:08:53.699186 | orchestrator | a4814ae274be registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler
2025-09-20 10:08:53.699197 | orchestrator | 532e0ca38f0e registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-09-20 10:08:53.699208 | orchestrator | 3ef31c8ab4df registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api
2025-09-20 10:08:53.699219 | orchestrator | cba2845889b3 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-09-20 10:08:53.699229 | orchestrator | 6238ebf13879 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-09-20 10:08:53.699240 | orchestrator | b6b0adcef363 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2025-09-20 10:08:53.699251 | orchestrator | f669fc1aa476 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-09-20 10:08:53.699262 | orchestrator | 7c8b8436f305 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-09-20 10:08:53.699273 | orchestrator | 777a71962d83 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-09-20 10:08:53.699284 | orchestrator | 9023f1eed95c registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-09-20 10:08:53.699295 | orchestrator | 7c12a23f319e registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-09-20 10:08:53.699306 | orchestrator | c7a4f57c5c78 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-09-20 10:08:53.699317 | orchestrator | 2289e6623fbd registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-09-20 10:08:53.699335 | orchestrator | 7dd4e3c9b219 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1
2025-09-20 10:08:53.699351 | orchestrator | 28ee15be5628 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-09-20 10:08:53.699363 | orchestrator | 5766c2b33c7a registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-09-20 10:08:53.699374 | orchestrator | a1bdf6a4dbf9 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-09-20 10:08:53.699385 | orchestrator | cab4f1b7eb0a registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd
2025-09-20 10:08:53.699402 | orchestrator | 7fdc3a44c77c registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db
2025-09-20 10:08:53.699414 | orchestrator | 1af09e19e2d1 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2025-09-20 10:08:53.699425 | orchestrator | 5c6d2c34ed3b registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-09-20 10:08:53.699436 | orchestrator | 11755dfe99fd registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-09-20 10:08:53.699447 | orchestrator | 38d4be8c9418 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2025-09-20 10:08:53.699458 | orchestrator | b36ff817dfaa registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-09-20 10:08:53.699469 | orchestrator | 87122d685f90 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-09-20 10:08:53.699480 | orchestrator | 6c8c3ed9c3c8 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-09-20 10:08:53.699491 | orchestrator | f461f8230531 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-09-20 10:08:53.699502 | orchestrator | 0b76cdf95bb0 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-09-20 10:08:53.699513 | orchestrator | 0c14b8061b9c registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-09-20 10:08:53.699524 | orchestrator | 3f82114922d4 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-20 10:08:53.699535 | orchestrator | fd9fa887312d registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-09-20 10:08:54.048645 | orchestrator |
2025-09-20 10:08:54.048739 | orchestrator | ## Images @ testbed-node-1
2025-09-20 10:08:54.048752 | orchestrator |
2025-09-20 10:08:54.048763 | orchestrator | + echo
2025-09-20 10:08:54.048774 | orchestrator | + echo '## Images @ testbed-node-1'
2025-09-20 10:08:54.048784 | orchestrator | + echo
2025-09-20 10:08:54.048794 | orchestrator | + osism container testbed-node-1 images
2025-09-20 10:08:56.528513 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-20 10:08:56.528655 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a56e1a509897 7 hours ago 1.27GB
2025-09-20 10:08:56.528669 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 cbd748637569 9 hours ago 331MB
2025-09-20 10:08:56.528679 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d4294af2e892 9 hours ago 328MB
2025-09-20 10:08:56.528688 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b44872d5d32b 9 hours ago 631MB
2025-09-20 10:08:56.528697 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ce9d589a849 9 hours ago 748MB
2025-09-20 10:08:56.528706 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 c2d4086fa5d2 9 hours ago 321MB
2025-09-20 10:08:56.528715 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 062917222ddf 9 hours ago 1.59GB
2025-09-20 10:08:56.528723 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 807046badeab 9 hours ago 1.56GB
2025-09-20 10:08:56.528732 | orchestrator | registry.osism.tech/kolla/cron 2024.2 fdb1ac7fd2c0 9 hours ago 320MB
2025-09-20 10:08:56.528740 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 c5be57c51f09 9 hours ago 1.05GB
2025-09-20 10:08:56.528749 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 74bf1faee51e 9 hours ago 377MB
2025-09-20 10:08:56.528758 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 279ed9090156 9 hours ago 420MB
2025-09-20 10:08:56.528766 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4c3412419f36 9 hours ago 327MB
2025-09-20 10:08:56.528775 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d07ecf7097cd 9 hours ago 327MB
2025-09-20 10:08:56.528784 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ef300c8d9258 9 hours ago 1.21GB
2025-09-20 10:08:56.528809 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 a288348ddc28 9 hours ago 593MB
2025-09-20 10:08:56.528822 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 67239c483b22 9 hours ago 360MB
2025-09-20 10:08:56.528831 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 8fdc5556ff2a 9 hours ago 356MB
2025-09-20 10:08:56.528840 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4a86fe1b5d5d 9 hours ago 353MB
2025-09-20 10:08:56.528848 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 28f5d6626265 9 hours ago 347MB
2025-09-20 10:08:56.528857 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 dad7dbe29e33 9 hours ago 412MB
2025-09-20 10:08:56.528866 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 d534700a4020 9 hours ago 364MB
2025-09-20 10:08:56.528874 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 2b2c27a921cf 9 hours ago 364MB
2025-09-20 10:08:56.528883 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 bebccfdca14f 9 hours ago 1.2GB
2025-09-20 10:08:56.528892 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 151d45e415a2 9 hours ago 1.31GB
2025-09-20 10:08:56.528900 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 90b8a0e973d1 9 hours ago 1.16GB
2025-09-20 10:08:56.528909 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 55718ea9eeb0 9 hours ago 1.11GB
2025-09-20 10:08:56.528918 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 4e25fa6e32b9 9 hours ago 1.11GB
2025-09-20 10:08:56.528926 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 2c881665cc69 9 hours ago 1.04GB
2025-09-20 10:08:56.528935 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4d300102afee 9 hours ago 1.41GB
2025-09-20 10:08:56.528951 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 baa1bc8f1e13 9 hours ago 1.41GB
2025-09-20 10:08:56.528960 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 b4f55138c4ad 9 hours ago 1.1GB
2025-09-20 10:08:56.528969 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 2e361f9a2585 9 hours ago 1.12GB
2025-09-20 10:08:56.528978 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 9492daf1f100 9 hours ago 1.12GB
2025-09-20 10:08:56.528987 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c0b88995f1b5 9 hours ago 1.1GB
2025-09-20 10:08:56.528996 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 0297c70d41f5 9 hours ago 1.1GB
2025-09-20 10:08:56.529005 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 6566218469d4 9 hours ago 1.06GB
2025-09-20 10:08:56.529027 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6297bc16ad52 9 hours ago 1.06GB
2025-09-20 10:08:56.529037 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 daa88d238be7 9 hours ago 1.06GB
2025-09-20 10:08:56.529046 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 478801741ccd 9 hours ago 1.3GB
2025-09-20 10:08:56.529054 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 304be05688f9 9 hours ago 1.3GB
2025-09-20 10:08:56.529062 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 6334b378f681 9 hours ago 1.42GB
2025-09-20 10:08:56.529071 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 3e549beb306f 9 hours ago 1.3GB
2025-09-20 10:08:56.529080 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 0cd53433cbeb 9 hours ago 1.05GB
2025-09-20 10:08:56.529088 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b4cb5c883aaa 9 hours ago 1.05GB
2025-09-20 10:08:56.529097 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 782b8fe503f7 9 hours ago 1.05GB
2025-09-20 10:08:56.529105 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 91b868db9be1 9 hours ago 1.06GB
2025-09-20 10:08:56.529114 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 c57fe4414b86 9 hours ago 1.05GB
2025-09-20 10:08:56.529122 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 88cef4128b0e 9 hours ago 1.06GB
2025-09-20 10:08:56.529131 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 286a7939de9e 9 hours ago 1.15GB
2025-09-20 10:08:56.529139 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3ed757503e46 9 hours ago 1.25GB
2025-09-20 10:08:56.529148 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 63d70c74c0aa 9 hours ago 949MB
2025-09-20 10:08:56.529156 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 548b04fa8ce4 9 hours ago 949MB
2025-09-20 10:08:56.529165 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b9e7968e9914 9 hours ago 949MB
2025-09-20 10:08:56.529174 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 bd01fffe8a34 9 hours ago 949MB
2025-09-20 10:08:56.877291 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-20 10:08:56.878306 | orchestrator | ++ semver latest 5.0.0
2025-09-20 10:08:56.938428 | orchestrator |
2025-09-20 10:08:56.938494 | orchestrator | ## Containers @ testbed-node-2
2025-09-20 10:08:56.938501 | orchestrator |
2025-09-20 10:08:56.938506 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-20 10:08:56.938510 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-20 10:08:56.938514 | orchestrator | + echo
2025-09-20 10:08:56.938519 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-09-20 10:08:56.938524 | orchestrator | + echo
2025-09-20 10:08:56.938528 | orchestrator | + osism container testbed-node-2 ps
2025-09-20 10:08:59.403833 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-20 10:08:59.403939 | orchestrator | 0694e14b8f34 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-09-20 10:08:59.403956 | orchestrator | 1f0cb8bdb605 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-09-20 10:08:59.403968 | orchestrator | 09102bfcfcef registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager
2025-09-20 10:08:59.403999 | orchestrator | 677d353331ee registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-09-20 10:08:59.404011 | orchestrator | 5a3cdd0662e9 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-09-20 10:08:59.404022 | orchestrator | 7eab23e9e8ae registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-20 10:08:59.404033 | orchestrator | 5374366be4b9 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-09-20 10:08:59.404045 | orchestrator | 7425e1334a98 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) magnum_api
2025-09-20 10:08:59.404074 | orchestrator | 1776e436cf8a registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-09-20 10:08:59.404085 | orchestrator | e352c77fa408 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-09-20 10:08:59.404108 | orchestrator | c76aba4eaecb registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-09-20 10:08:59.404120 | orchestrator | 95de055ba7e1 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-09-20 10:08:59.404131 | orchestrator | 75d1351dd03f registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-09-20 10:08:59.404141 | orchestrator | cde570b74db0 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_mdns 2025-09-20 10:08:59.404152 | orchestrator | 37e608047e47 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_producer 2025-09-20 10:08:59.404164 | orchestrator | c2170a7552ae registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_central 2025-09-20 10:08:59.404175 | orchestrator | dbe17370bd63 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-09-20 10:08:59.404185 | orchestrator | dbd126f418ef registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2025-09-20 10:08:59.404196 | orchestrator | a55007faf81e registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 
12 minutes (healthy) barbican_worker 2025-09-20 10:08:59.404228 | orchestrator | e0568104b1ee registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-09-20 10:08:59.404240 | orchestrator | 4baf2fef7f41 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-09-20 10:08:59.404269 | orchestrator | 3e12b2820e7c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-09-20 10:08:59.404281 | orchestrator | 101d5182f43c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-09-20 10:08:59.404293 | orchestrator | a319f89b60f8 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-09-20 10:08:59.404304 | orchestrator | a7c01ada8129 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-09-20 10:08:59.404315 | orchestrator | d4cb8a9bfdb4 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-09-20 10:08:59.404326 | orchestrator | 1598007cc980 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-09-20 10:08:59.404337 | orchestrator | 819ac95657bb registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-09-20 10:08:59.404348 | orchestrator | 4352e02bf18a registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-09-20 10:08:59.404359 | orchestrator | 7c0015fe8411 
registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-09-20 10:08:59.404372 | orchestrator | 403ee320c2e2 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-09-20 10:08:59.404384 | orchestrator | 93c89c176dd1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-09-20 10:08:59.404397 | orchestrator | c2e99fea201a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-09-20 10:08:59.404410 | orchestrator | 5922aafc40cb registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-09-20 10:08:59.404422 | orchestrator | 9ecae5c523fe registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-09-20 10:08:59.404435 | orchestrator | bc8122aa8dea registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-09-20 10:08:59.404454 | orchestrator | 0a40aaeb71dd registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-09-20 10:08:59.404467 | orchestrator | 983d5701d176 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-09-20 10:08:59.404486 | orchestrator | 182402e66cf8 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-09-20 10:08:59.404499 | orchestrator | c9f4bbc199c4 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-09-20 10:08:59.404511 | orchestrator | 2a44445d33eb 
registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-09-20 10:08:59.404523 | orchestrator | e741b24d822e registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-09-20 10:08:59.404541 | orchestrator | 51ec9a697364 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-09-20 10:08:59.404554 | orchestrator | 1e2281dd6ed7 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-09-20 10:08:59.404574 | orchestrator | 92be974c6fd9 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-09-20 10:08:59.404588 | orchestrator | c3a9ca846f7b registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2025-09-20 10:08:59.404601 | orchestrator | 9f8058db775d registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-09-20 10:08:59.404613 | orchestrator | 5056c8dbf462 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-09-20 10:08:59.404627 | orchestrator | ed35346005b9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-09-20 10:08:59.404661 | orchestrator | fd54d0ef470e registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-09-20 10:08:59.404674 | orchestrator | a9cdac890c2d registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-09-20 10:08:59.404686 | orchestrator | ce56f7c01c94 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 
minutes (healthy) redis_sentinel 2025-09-20 10:08:59.404699 | orchestrator | 1315fbb2c1c2 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-09-20 10:08:59.404711 | orchestrator | 89fb43b26c78 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) memcached 2025-09-20 10:08:59.404724 | orchestrator | 76657d67c697 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-09-20 10:08:59.404735 | orchestrator | 8a49aa6ffc9e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-09-20 10:08:59.404746 | orchestrator | fa04867d09b4 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-09-20 10:08:59.730551 | orchestrator | 2025-09-20 10:08:59.730627 | orchestrator | ## Images @ testbed-node-2 2025-09-20 10:08:59.730672 | orchestrator | 2025-09-20 10:08:59.730681 | orchestrator | + echo 2025-09-20 10:08:59.730689 | orchestrator | + echo '## Images @ testbed-node-2' 2025-09-20 10:08:59.730722 | orchestrator | + echo 2025-09-20 10:08:59.730730 | orchestrator | + osism container testbed-node-2 images 2025-09-20 10:09:02.149478 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-20 10:09:02.149612 | orchestrator | registry.osism.tech/osism/ceph-daemon reef a56e1a509897 7 hours ago 1.27GB 2025-09-20 10:09:02.149667 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 cbd748637569 9 hours ago 331MB 2025-09-20 10:09:02.149688 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d4294af2e892 9 hours ago 328MB 2025-09-20 10:09:02.149706 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 b44872d5d32b 9 hours ago 631MB 2025-09-20 10:09:02.149724 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 8ce9d589a849 9 hours ago 748MB 2025-09-20 10:09:02.149741 | orchestrator | 
registry.osism.tech/kolla/memcached 2024.2 c2d4086fa5d2 9 hours ago 321MB 2025-09-20 10:09:02.149759 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 062917222ddf 9 hours ago 1.59GB 2025-09-20 10:09:02.149777 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 807046badeab 9 hours ago 1.56GB 2025-09-20 10:09:02.149796 | orchestrator | registry.osism.tech/kolla/cron 2024.2 fdb1ac7fd2c0 9 hours ago 320MB 2025-09-20 10:09:02.149813 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 c5be57c51f09 9 hours ago 1.05GB 2025-09-20 10:09:02.149831 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 279ed9090156 9 hours ago 420MB 2025-09-20 10:09:02.149850 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 74bf1faee51e 9 hours ago 377MB 2025-09-20 10:09:02.149869 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4c3412419f36 9 hours ago 327MB 2025-09-20 10:09:02.149886 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 d07ecf7097cd 9 hours ago 327MB 2025-09-20 10:09:02.149905 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 ef300c8d9258 9 hours ago 1.21GB 2025-09-20 10:09:02.149923 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 a288348ddc28 9 hours ago 593MB 2025-09-20 10:09:02.149940 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 8fdc5556ff2a 9 hours ago 356MB 2025-09-20 10:09:02.149958 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 67239c483b22 9 hours ago 360MB 2025-09-20 10:09:02.149976 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 4a86fe1b5d5d 9 hours ago 353MB 2025-09-20 10:09:02.149992 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 28f5d6626265 9 hours ago 347MB 2025-09-20 10:09:02.150007 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 dad7dbe29e33 9 hours ago 412MB 2025-09-20 10:09:02.150069 | orchestrator | 
registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 d534700a4020 9 hours ago 364MB 2025-09-20 10:09:02.150088 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 2b2c27a921cf 9 hours ago 364MB 2025-09-20 10:09:02.150108 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 bebccfdca14f 9 hours ago 1.2GB 2025-09-20 10:09:02.150126 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 151d45e415a2 9 hours ago 1.31GB 2025-09-20 10:09:02.150145 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 90b8a0e973d1 9 hours ago 1.16GB 2025-09-20 10:09:02.150164 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 55718ea9eeb0 9 hours ago 1.11GB 2025-09-20 10:09:02.150207 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 4e25fa6e32b9 9 hours ago 1.11GB 2025-09-20 10:09:02.150259 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 2c881665cc69 9 hours ago 1.04GB 2025-09-20 10:09:02.150280 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4d300102afee 9 hours ago 1.41GB 2025-09-20 10:09:02.150297 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 baa1bc8f1e13 9 hours ago 1.41GB 2025-09-20 10:09:02.150315 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 b4f55138c4ad 9 hours ago 1.1GB 2025-09-20 10:09:02.150332 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 2e361f9a2585 9 hours ago 1.12GB 2025-09-20 10:09:02.150350 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 9492daf1f100 9 hours ago 1.12GB 2025-09-20 10:09:02.150367 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 c0b88995f1b5 9 hours ago 1.1GB 2025-09-20 10:09:02.150385 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 0297c70d41f5 9 hours ago 1.1GB 2025-09-20 10:09:02.150403 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 6566218469d4 9 hours ago 1.06GB 2025-09-20 10:09:02.150446 | orchestrator | 
registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6297bc16ad52 9 hours ago 1.06GB 2025-09-20 10:09:02.150467 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 daa88d238be7 9 hours ago 1.06GB 2025-09-20 10:09:02.150486 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 478801741ccd 9 hours ago 1.3GB 2025-09-20 10:09:02.150506 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 304be05688f9 9 hours ago 1.3GB 2025-09-20 10:09:02.150525 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 6334b378f681 9 hours ago 1.42GB 2025-09-20 10:09:02.150543 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 3e549beb306f 9 hours ago 1.3GB 2025-09-20 10:09:02.150562 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 0cd53433cbeb 9 hours ago 1.05GB 2025-09-20 10:09:02.150581 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 b4cb5c883aaa 9 hours ago 1.05GB 2025-09-20 10:09:02.150600 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 782b8fe503f7 9 hours ago 1.05GB 2025-09-20 10:09:02.150617 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 91b868db9be1 9 hours ago 1.06GB 2025-09-20 10:09:02.150737 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 c57fe4414b86 9 hours ago 1.05GB 2025-09-20 10:09:02.150760 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 88cef4128b0e 9 hours ago 1.06GB 2025-09-20 10:09:02.150779 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 286a7939de9e 9 hours ago 1.15GB 2025-09-20 10:09:02.150806 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3ed757503e46 9 hours ago 1.25GB 2025-09-20 10:09:02.150825 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 63d70c74c0aa 9 hours ago 949MB 2025-09-20 10:09:02.150843 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 548b04fa8ce4 9 hours ago 949MB 2025-09-20 10:09:02.150861 | orchestrator | 
registry.osism.tech/kolla/ovn-northd 2024.2 b9e7968e9914 9 hours ago 949MB 2025-09-20 10:09:02.150879 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 bd01fffe8a34 9 hours ago 949MB 2025-09-20 10:09:02.510408 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-09-20 10:09:02.519518 | orchestrator | + set -e 2025-09-20 10:09:02.519574 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 10:09:02.521595 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 10:09:02.521701 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 10:09:02.521728 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 10:09:02.521749 | orchestrator | ++ CEPH_VERSION=reef 2025-09-20 10:09:02.521769 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 10:09:02.521824 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 10:09:02.521836 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 10:09:02.521847 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 10:09:02.521858 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 10:09:02.521874 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 10:09:02.521885 | orchestrator | ++ export ARA=false 2025-09-20 10:09:02.521896 | orchestrator | ++ ARA=false 2025-09-20 10:09:02.521907 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 10:09:02.521918 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 10:09:02.521928 | orchestrator | ++ export TEMPEST=false 2025-09-20 10:09:02.521939 | orchestrator | ++ TEMPEST=false 2025-09-20 10:09:02.521949 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 10:09:02.521960 | orchestrator | ++ IS_ZUUL=true 2025-09-20 10:09:02.521971 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 10:09:02.521982 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 10:09:02.521992 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 10:09:02.522003 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 
10:09:02.522059 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 10:09:02.522074 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 10:09:02.522086 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 10:09:02.522097 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 10:09:02.522362 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 10:09:02.522380 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 10:09:02.522393 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-20 10:09:02.522406 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-09-20 10:09:02.528983 | orchestrator | + set -e 2025-09-20 10:09:02.529037 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 10:09:02.529049 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 10:09:02.529060 | orchestrator | ++ INTERACTIVE=false 2025-09-20 10:09:02.529070 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 10:09:02.529081 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 10:09:02.529092 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-20 10:09:02.529875 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-20 10:09:02.536114 | orchestrator | 2025-09-20 10:09:02.536157 | orchestrator | # Ceph status 2025-09-20 10:09:02.536169 | orchestrator | 2025-09-20 10:09:02.536181 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 10:09:02.536192 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 10:09:02.536203 | orchestrator | + echo 2025-09-20 10:09:02.536213 | orchestrator | + echo '# Ceph status' 2025-09-20 10:09:02.536224 | orchestrator | + echo 2025-09-20 10:09:02.536235 | orchestrator | + ceph -s 2025-09-20 10:09:03.134623 | orchestrator | cluster: 2025-09-20 10:09:03.134780 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-09-20 10:09:03.135675 | orchestrator | 
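The trace above shows `manager-version.sh` reading `manager_version` from `/opt/configuration/environments/manager/configuration.yml` with an awk one-liner (`-F': '` splits key and value). A minimal, self-contained reproduction of that lookup — the temp file stands in for the real configuration file:

```shell
# Reproduce the manager-version.sh lookup seen in the trace above:
# extract the value of the "manager_version:" key with awk, using ": "
# as the field separator. The temp file is a stand-in for
# /opt/configuration/environments/manager/configuration.yml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
---
manager_version: latest
EOF
MANAGER_VERSION=$(awk '-F: ' '/^manager_version:/ { print $2 }' "$cfg")
echo "$MANAGER_VERSION"
rm -f "$cfg"
```

Note this only matches top-level keys anchored at the start of a line, which is why the scripts can get away without a YAML parser.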
health: HEALTH_OK 2025-09-20 10:09:03.135708 | orchestrator | 2025-09-20 10:09:03.135721 | orchestrator | services: 2025-09-20 10:09:03.135732 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-09-20 10:09:03.135745 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0 2025-09-20 10:09:03.135757 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-09-20 10:09:03.135768 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-09-20 10:09:03.135780 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-09-20 10:09:03.135791 | orchestrator | 2025-09-20 10:09:03.135802 | orchestrator | data: 2025-09-20 10:09:03.135813 | orchestrator | volumes: 1/1 healthy 2025-09-20 10:09:03.135824 | orchestrator | pools: 14 pools, 401 pgs 2025-09-20 10:09:03.135835 | orchestrator | objects: 524 objects, 2.2 GiB 2025-09-20 10:09:03.135846 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-09-20 10:09:03.135857 | orchestrator | pgs: 401 active+clean 2025-09-20 10:09:03.135868 | orchestrator | 2025-09-20 10:09:03.185707 | orchestrator | 2025-09-20 10:09:03.185823 | orchestrator | # Ceph versions 2025-09-20 10:09:03.185839 | orchestrator | 2025-09-20 10:09:03.185851 | orchestrator | + echo 2025-09-20 10:09:03.185863 | orchestrator | + echo '# Ceph versions' 2025-09-20 10:09:03.185875 | orchestrator | + echo 2025-09-20 10:09:03.185886 | orchestrator | + ceph versions 2025-09-20 10:09:03.798136 | orchestrator | { 2025-09-20 10:09:03.798254 | orchestrator | "mon": { 2025-09-20 10:09:03.798280 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-20 10:09:03.798300 | orchestrator | }, 2025-09-20 10:09:03.798318 | orchestrator | "mgr": { 2025-09-20 10:09:03.798373 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-20 10:09:03.798393 | orchestrator | }, 
2025-09-20 10:09:03.798410 | orchestrator | "osd": { 2025-09-20 10:09:03.798426 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-09-20 10:09:03.798437 | orchestrator | }, 2025-09-20 10:09:03.798446 | orchestrator | "mds": { 2025-09-20 10:09:03.798457 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-20 10:09:03.798466 | orchestrator | }, 2025-09-20 10:09:03.798476 | orchestrator | "rgw": { 2025-09-20 10:09:03.798486 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-20 10:09:03.798495 | orchestrator | }, 2025-09-20 10:09:03.798505 | orchestrator | "overall": { 2025-09-20 10:09:03.798515 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-09-20 10:09:03.798525 | orchestrator | } 2025-09-20 10:09:03.798535 | orchestrator | } 2025-09-20 10:09:03.848146 | orchestrator | 2025-09-20 10:09:03.848239 | orchestrator | # Ceph OSD tree 2025-09-20 10:09:03.848251 | orchestrator | 2025-09-20 10:09:03.848264 | orchestrator | + echo 2025-09-20 10:09:03.848276 | orchestrator | + echo '# Ceph OSD tree' 2025-09-20 10:09:03.848288 | orchestrator | + echo 2025-09-20 10:09:03.848299 | orchestrator | + ceph osd df tree 2025-09-20 10:09:04.377891 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-09-20 10:09:04.378072 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-09-20 10:09:04.378094 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-09-20 10:09:04.378108 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.96 1.18 200 up osd.0 2025-09-20 10:09:04.378119 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 996 MiB 923 MiB 1 KiB 74 MiB 19 GiB 4.87 0.82 190 up osd.4 
2025-09-20 10:09:04.378130 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-09-20 10:09:04.378147 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.93 1.17 205 up osd.2 2025-09-20 10:09:04.378167 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1004 MiB 931 MiB 1 KiB 74 MiB 19 GiB 4.91 0.83 187 up osd.5 2025-09-20 10:09:04.378185 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-09-20 10:09:04.378203 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 74 MiB 19 GiB 7.26 1.23 184 up osd.1 2025-09-20 10:09:04.378223 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 937 MiB 867 MiB 1 KiB 70 MiB 19 GiB 4.58 0.77 204 up osd.3 2025-09-20 10:09:04.378242 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-09-20 10:09:04.378262 | orchestrator | MIN/MAX VAR: 0.77/1.23 STDDEV: 1.14 2025-09-20 10:09:04.421511 | orchestrator | 2025-09-20 10:09:04.421615 | orchestrator | # Ceph monitor status 2025-09-20 10:09:04.421631 | orchestrator | 2025-09-20 10:09:04.421683 | orchestrator | + echo 2025-09-20 10:09:04.421695 | orchestrator | + echo '# Ceph monitor status' 2025-09-20 10:09:04.421707 | orchestrator | + echo 2025-09-20 10:09:04.421718 | orchestrator | + ceph mon stat 2025-09-20 10:09:05.077851 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-09-20 10:09:05.132004 | orchestrator | 2025-09-20 10:09:05.132111 | orchestrator | # Ceph quorum status 2025-09-20 10:09:05.132133 | orchestrator | 2025-09-20 10:09:05.132161 | orchestrator | + echo 2025-09-20 10:09:05.132225 | 
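The `ceph -s` output above reports `HEALTH_OK`. A hedged sketch of the kind of health gate a check script can apply after that status call — `ceph` is stubbed here so the snippet runs anywhere; on a real node, drop the stub and call the actual CLI:

```shell
# Gate on cluster health: fail unless the cluster reports HEALTH_OK.
# The ceph() function is a stub standing in for the real CLI so this
# sketch is runnable without a cluster.
ceph() { echo HEALTH_OK; }
status=$(ceph health)
if [ "$status" = "HEALTH_OK" ]; then
    echo "cluster healthy"
else
    echo "cluster unhealthy: $status" >&2
    exit 1
fi
```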
orchestrator | + echo '# Ceph quorum status' 2025-09-20 10:09:05.132248 | orchestrator | + echo 2025-09-20 10:09:05.132890 | orchestrator | + jq 2025-09-20 10:09:05.132981 | orchestrator | + ceph quorum_status 2025-09-20 10:09:05.772146 | orchestrator | { 2025-09-20 10:09:05.772277 | orchestrator | "election_epoch": 8, 2025-09-20 10:09:05.772293 | orchestrator | "quorum": [ 2025-09-20 10:09:05.772306 | orchestrator | 0, 2025-09-20 10:09:05.772318 | orchestrator | 1, 2025-09-20 10:09:05.772328 | orchestrator | 2 2025-09-20 10:09:05.772339 | orchestrator | ], 2025-09-20 10:09:05.772350 | orchestrator | "quorum_names": [ 2025-09-20 10:09:05.772361 | orchestrator | "testbed-node-0", 2025-09-20 10:09:05.772372 | orchestrator | "testbed-node-1", 2025-09-20 10:09:05.772382 | orchestrator | "testbed-node-2" 2025-09-20 10:09:05.772393 | orchestrator | ], 2025-09-20 10:09:05.772404 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-09-20 10:09:05.772416 | orchestrator | "quorum_age": 1693, 2025-09-20 10:09:05.772427 | orchestrator | "features": { 2025-09-20 10:09:05.772438 | orchestrator | "quorum_con": "4540138322906710015", 2025-09-20 10:09:05.772449 | orchestrator | "quorum_mon": [ 2025-09-20 10:09:05.772460 | orchestrator | "kraken", 2025-09-20 10:09:05.772470 | orchestrator | "luminous", 2025-09-20 10:09:05.772482 | orchestrator | "mimic", 2025-09-20 10:09:05.772493 | orchestrator | "osdmap-prune", 2025-09-20 10:09:05.772504 | orchestrator | "nautilus", 2025-09-20 10:09:05.772515 | orchestrator | "octopus", 2025-09-20 10:09:05.772525 | orchestrator | "pacific", 2025-09-20 10:09:05.772536 | orchestrator | "elector-pinging", 2025-09-20 10:09:05.772547 | orchestrator | "quincy", 2025-09-20 10:09:05.772558 | orchestrator | "reef" 2025-09-20 10:09:05.772569 | orchestrator | ] 2025-09-20 10:09:05.772580 | orchestrator | }, 2025-09-20 10:09:05.772591 | orchestrator | "monmap": { 2025-09-20 10:09:05.772602 | orchestrator | "epoch": 1, 2025-09-20 10:09:05.772613 | 
orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-09-20 10:09:05.772624 | orchestrator | "modified": "2025-09-20T09:40:34.633928Z", 2025-09-20 10:09:05.772678 | orchestrator | "created": "2025-09-20T09:40:34.633928Z", 2025-09-20 10:09:05.772694 | orchestrator | "min_mon_release": 18, 2025-09-20 10:09:05.772707 | orchestrator | "min_mon_release_name": "reef", 2025-09-20 10:09:05.772720 | orchestrator | "election_strategy": 1, 2025-09-20 10:09:05.772734 | orchestrator | "disallowed_leaders: ": "", 2025-09-20 10:09:05.772747 | orchestrator | "stretch_mode": false, 2025-09-20 10:09:05.772760 | orchestrator | "tiebreaker_mon": "", 2025-09-20 10:09:05.772772 | orchestrator | "removed_ranks: ": "", 2025-09-20 10:09:05.772785 | orchestrator | "features": { 2025-09-20 10:09:05.772798 | orchestrator | "persistent": [ 2025-09-20 10:09:05.772811 | orchestrator | "kraken", 2025-09-20 10:09:05.772823 | orchestrator | "luminous", 2025-09-20 10:09:05.772836 | orchestrator | "mimic", 2025-09-20 10:09:05.772848 | orchestrator | "osdmap-prune", 2025-09-20 10:09:05.772861 | orchestrator | "nautilus", 2025-09-20 10:09:05.772873 | orchestrator | "octopus", 2025-09-20 10:09:05.772886 | orchestrator | "pacific", 2025-09-20 10:09:05.772898 | orchestrator | "elector-pinging", 2025-09-20 10:09:05.772911 | orchestrator | "quincy", 2025-09-20 10:09:05.772923 | orchestrator | "reef" 2025-09-20 10:09:05.772936 | orchestrator | ], 2025-09-20 10:09:05.772948 | orchestrator | "optional": [] 2025-09-20 10:09:05.772961 | orchestrator | }, 2025-09-20 10:09:05.772974 | orchestrator | "mons": [ 2025-09-20 10:09:05.772987 | orchestrator | { 2025-09-20 10:09:05.773000 | orchestrator | "rank": 0, 2025-09-20 10:09:05.773013 | orchestrator | "name": "testbed-node-0", 2025-09-20 10:09:05.773026 | orchestrator | "public_addrs": { 2025-09-20 10:09:05.773040 | orchestrator | "addrvec": [ 2025-09-20 10:09:05.773052 | orchestrator | { 2025-09-20 10:09:05.773063 | orchestrator | "type": "v2", 
2025-09-20 10:09:05.773074 | orchestrator | "addr": "192.168.16.10:3300", 2025-09-20 10:09:05.773085 | orchestrator | "nonce": 0 2025-09-20 10:09:05.773101 | orchestrator | }, 2025-09-20 10:09:05.773118 | orchestrator | { 2025-09-20 10:09:05.773129 | orchestrator | "type": "v1", 2025-09-20 10:09:05.773139 | orchestrator | "addr": "192.168.16.10:6789", 2025-09-20 10:09:05.773150 | orchestrator | "nonce": 0 2025-09-20 10:09:05.773161 | orchestrator | } 2025-09-20 10:09:05.773172 | orchestrator | ] 2025-09-20 10:09:05.773183 | orchestrator | }, 2025-09-20 10:09:05.773194 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-09-20 10:09:05.773205 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-09-20 10:09:05.773215 | orchestrator | "priority": 0, 2025-09-20 10:09:05.773226 | orchestrator | "weight": 0, 2025-09-20 10:09:05.773268 | orchestrator | "crush_location": "{}" 2025-09-20 10:09:05.773280 | orchestrator | }, 2025-09-20 10:09:05.773291 | orchestrator | { 2025-09-20 10:09:05.773302 | orchestrator | "rank": 1, 2025-09-20 10:09:05.773313 | orchestrator | "name": "testbed-node-1", 2025-09-20 10:09:05.773323 | orchestrator | "public_addrs": { 2025-09-20 10:09:05.773334 | orchestrator | "addrvec": [ 2025-09-20 10:09:05.773345 | orchestrator | { 2025-09-20 10:09:05.773356 | orchestrator | "type": "v2", 2025-09-20 10:09:05.773367 | orchestrator | "addr": "192.168.16.11:3300", 2025-09-20 10:09:05.773378 | orchestrator | "nonce": 0 2025-09-20 10:09:05.773388 | orchestrator | }, 2025-09-20 10:09:05.773399 | orchestrator | { 2025-09-20 10:09:05.773410 | orchestrator | "type": "v1", 2025-09-20 10:09:05.773421 | orchestrator | "addr": "192.168.16.11:6789", 2025-09-20 10:09:05.773432 | orchestrator | "nonce": 0 2025-09-20 10:09:05.773443 | orchestrator | } 2025-09-20 10:09:05.773454 | orchestrator | ] 2025-09-20 10:09:05.773465 | orchestrator | }, 2025-09-20 10:09:05.773475 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-09-20 10:09:05.773486 | orchestrator | 
"public_addr": "192.168.16.11:6789/0", 2025-09-20 10:09:05.773497 | orchestrator | "priority": 0, 2025-09-20 10:09:05.773508 | orchestrator | "weight": 0, 2025-09-20 10:09:05.773519 | orchestrator | "crush_location": "{}" 2025-09-20 10:09:05.773530 | orchestrator | }, 2025-09-20 10:09:05.773541 | orchestrator | { 2025-09-20 10:09:05.773552 | orchestrator | "rank": 2, 2025-09-20 10:09:05.773563 | orchestrator | "name": "testbed-node-2", 2025-09-20 10:09:05.773574 | orchestrator | "public_addrs": { 2025-09-20 10:09:05.773585 | orchestrator | "addrvec": [ 2025-09-20 10:09:05.773596 | orchestrator | { 2025-09-20 10:09:05.773606 | orchestrator | "type": "v2", 2025-09-20 10:09:05.773617 | orchestrator | "addr": "192.168.16.12:3300", 2025-09-20 10:09:05.773628 | orchestrator | "nonce": 0 2025-09-20 10:09:05.773665 | orchestrator | }, 2025-09-20 10:09:05.773676 | orchestrator | { 2025-09-20 10:09:05.773687 | orchestrator | "type": "v1", 2025-09-20 10:09:05.773698 | orchestrator | "addr": "192.168.16.12:6789", 2025-09-20 10:09:05.773709 | orchestrator | "nonce": 0 2025-09-20 10:09:05.773720 | orchestrator | } 2025-09-20 10:09:05.773731 | orchestrator | ] 2025-09-20 10:09:05.773742 | orchestrator | }, 2025-09-20 10:09:05.773753 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-09-20 10:09:05.773781 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-09-20 10:09:05.773792 | orchestrator | "priority": 0, 2025-09-20 10:09:05.773803 | orchestrator | "weight": 0, 2025-09-20 10:09:05.773814 | orchestrator | "crush_location": "{}" 2025-09-20 10:09:05.773824 | orchestrator | } 2025-09-20 10:09:05.773836 | orchestrator | ] 2025-09-20 10:09:05.773846 | orchestrator | } 2025-09-20 10:09:05.773857 | orchestrator | } 2025-09-20 10:09:05.773881 | orchestrator | 2025-09-20 10:09:05.773893 | orchestrator | # Ceph free space status 2025-09-20 10:09:05.773904 | orchestrator | 2025-09-20 10:09:05.773915 | orchestrator | + echo 2025-09-20 10:09:05.773926 | orchestrator | + echo '# 
Ceph free space status' 2025-09-20 10:09:05.773937 | orchestrator | + echo 2025-09-20 10:09:05.773948 | orchestrator | + ceph df 2025-09-20 10:09:06.353771 | orchestrator | --- RAW STORAGE --- 2025-09-20 10:09:06.353895 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-09-20 10:09:06.353934 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-09-20 10:09:06.353946 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-09-20 10:09:06.353958 | orchestrator | 2025-09-20 10:09:06.353970 | orchestrator | --- POOLS --- 2025-09-20 10:09:06.353982 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-09-20 10:09:06.353995 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-09-20 10:09:06.354007 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-09-20 10:09:06.354072 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-09-20 10:09:06.354087 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-09-20 10:09:06.354113 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-09-20 10:09:06.354134 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-09-20 10:09:06.354149 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-09-20 10:09:06.354216 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-09-20 10:09:06.354240 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-09-20 10:09:06.354260 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-09-20 10:09:06.354277 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-09-20 10:09:06.354291 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2025-09-20 10:09:06.354304 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-09-20 10:09:06.354318 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-09-20 10:09:06.403431 | orchestrator | ++ semver latest 5.0.0 2025-09-20 10:09:06.471074 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-20 10:09:06.471192 
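The `quorum_status` output above lists three names under `quorum_names`. A dependency-free sketch (grep/wc instead of jq) of checking that all expected monitors are in quorum, in the spirit of the validation tasks later in this log — the JSON fragment is a condensed sample of the output above:

```shell
# Count monitors in quorum from a quorum_status-style fragment and compare
# against the expected monitor count. The sample string condenses the
# "quorum_names" array printed above; feed real `ceph quorum_status`
# output on a live cluster.
qs='"quorum_names": [ "testbed-node-0", "testbed-node-1", "testbed-node-2" ]'
in_quorum=$(echo "$qs" | grep -o 'testbed-node-[0-9]*' | wc -l)
expected=3
if [ "$in_quorum" -eq "$expected" ]; then
    echo "all $expected mons in quorum"
else
    echo "only $in_quorum of $expected mons in quorum" >&2
    exit 1
fi
```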
| orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-20 10:09:06.471212 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-09-20 10:09:06.471225 | orchestrator | + osism apply facts 2025-09-20 10:09:18.590145 | orchestrator | 2025-09-20 10:09:18 | INFO  | Task b6c9a2a0-e6ce-4e98-b247-365ff48e19ac (facts) was prepared for execution. 2025-09-20 10:09:18.590267 | orchestrator | 2025-09-20 10:09:18 | INFO  | It takes a moment until task b6c9a2a0-e6ce-4e98-b247-365ff48e19ac (facts) has been started and output is visible here. 2025-09-20 10:09:32.367330 | orchestrator | 2025-09-20 10:09:32.367439 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-20 10:09:32.367456 | orchestrator | 2025-09-20 10:09:32.367472 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-20 10:09:32.367492 | orchestrator | Saturday 20 September 2025 10:09:22 +0000 (0:00:00.293) 0:00:00.293 **** 2025-09-20 10:09:32.367522 | orchestrator | ok: [testbed-manager] 2025-09-20 10:09:32.367542 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:09:32.367561 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:09:32.367579 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:09:32.367597 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:09:32.367615 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:09:32.367634 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:09:32.367652 | orchestrator | 2025-09-20 10:09:32.367726 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-20 10:09:32.367742 | orchestrator | Saturday 20 September 2025 10:09:24 +0000 (0:00:01.563) 0:00:01.856 **** 2025-09-20 10:09:32.367754 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:09:32.367765 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:09:32.367776 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:09:32.367787 | orchestrator | skipping: 
[testbed-node-2] 2025-09-20 10:09:32.367798 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:09:32.367809 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:09:32.367820 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:09:32.367831 | orchestrator | 2025-09-20 10:09:32.367842 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-20 10:09:32.367854 | orchestrator | 2025-09-20 10:09:32.367865 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-20 10:09:32.367877 | orchestrator | Saturday 20 September 2025 10:09:25 +0000 (0:00:01.374) 0:00:03.230 **** 2025-09-20 10:09:32.367889 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:09:32.367903 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:09:32.367916 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:09:32.367934 | orchestrator | ok: [testbed-manager] 2025-09-20 10:09:32.367963 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:09:32.367984 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:09:32.368002 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:09:32.368020 | orchestrator | 2025-09-20 10:09:32.368038 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-20 10:09:32.368057 | orchestrator | 2025-09-20 10:09:32.368075 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-20 10:09:32.368132 | orchestrator | Saturday 20 September 2025 10:09:31 +0000 (0:00:05.521) 0:00:08.752 **** 2025-09-20 10:09:32.368152 | orchestrator | skipping: [testbed-manager] 2025-09-20 10:09:32.368170 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:09:32.368188 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:09:32.368208 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:09:32.368231 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:09:32.368250 | orchestrator | 
skipping: [testbed-node-4] 2025-09-20 10:09:32.368270 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:09:32.368288 | orchestrator | 2025-09-20 10:09:32.368307 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:09:32.368326 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:09:32.368347 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:09:32.368364 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:09:32.368380 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:09:32.368399 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:09:32.368418 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:09:32.368437 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:09:32.368456 | orchestrator | 2025-09-20 10:09:32.368475 | orchestrator | 2025-09-20 10:09:32.368495 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:09:32.368514 | orchestrator | Saturday 20 September 2025 10:09:31 +0000 (0:00:00.578) 0:00:09.330 **** 2025-09-20 10:09:32.368533 | orchestrator | =============================================================================== 2025-09-20 10:09:32.368550 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.52s 2025-09-20 10:09:32.368567 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.56s 2025-09-20 10:09:32.368585 | orchestrator | osism.commons.facts : Copy fact files 
----------------------------------- 1.37s 2025-09-20 10:09:32.368602 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2025-09-20 10:09:32.684760 | orchestrator | + osism validate ceph-mons 2025-09-20 10:10:04.686790 | orchestrator | 2025-09-20 10:10:04.686899 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-09-20 10:10:04.686916 | orchestrator | 2025-09-20 10:10:04.686927 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-20 10:10:04.686939 | orchestrator | Saturday 20 September 2025 10:09:48 +0000 (0:00:00.451) 0:00:00.451 **** 2025-09-20 10:10:04.686950 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:04.686961 | orchestrator | 2025-09-20 10:10:04.686972 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-20 10:10:04.686983 | orchestrator | Saturday 20 September 2025 10:09:49 +0000 (0:00:00.651) 0:00:01.103 **** 2025-09-20 10:10:04.686993 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:04.687004 | orchestrator | 2025-09-20 10:10:04.687015 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-20 10:10:04.687026 | orchestrator | Saturday 20 September 2025 10:09:50 +0000 (0:00:00.855) 0:00:01.959 **** 2025-09-20 10:10:04.687037 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.687048 | orchestrator | 2025-09-20 10:10:04.687059 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-20 10:10:04.687088 | orchestrator | Saturday 20 September 2025 10:09:50 +0000 (0:00:00.262) 0:00:02.222 **** 2025-09-20 10:10:04.687100 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.687111 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:04.687122 | orchestrator | ok: 
[testbed-node-2] 2025-09-20 10:10:04.687133 | orchestrator | 2025-09-20 10:10:04.687144 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-20 10:10:04.687155 | orchestrator | Saturday 20 September 2025 10:09:51 +0000 (0:00:00.309) 0:00:02.531 **** 2025-09-20 10:10:04.687165 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:10:04.687176 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.687186 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:04.687197 | orchestrator | 2025-09-20 10:10:04.687208 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-20 10:10:04.687219 | orchestrator | Saturday 20 September 2025 10:09:52 +0000 (0:00:01.025) 0:00:03.556 **** 2025-09-20 10:10:04.687230 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.687241 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:10:04.687252 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:10:04.687262 | orchestrator | 2025-09-20 10:10:04.687273 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-20 10:10:04.687284 | orchestrator | Saturday 20 September 2025 10:09:52 +0000 (0:00:00.297) 0:00:03.853 **** 2025-09-20 10:10:04.687294 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.687305 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:04.687316 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:10:04.687326 | orchestrator | 2025-09-20 10:10:04.687337 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 10:10:04.687348 | orchestrator | Saturday 20 September 2025 10:09:52 +0000 (0:00:00.487) 0:00:04.341 **** 2025-09-20 10:10:04.687359 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.687370 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:04.687381 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:10:04.687392 | 
orchestrator | 2025-09-20 10:10:04.687403 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-09-20 10:10:04.687413 | orchestrator | Saturday 20 September 2025 10:09:53 +0000 (0:00:00.319) 0:00:04.661 **** 2025-09-20 10:10:04.687424 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.687435 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:10:04.687446 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:10:04.687457 | orchestrator | 2025-09-20 10:10:04.687467 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-09-20 10:10:04.687478 | orchestrator | Saturday 20 September 2025 10:09:53 +0000 (0:00:00.286) 0:00:04.947 **** 2025-09-20 10:10:04.687489 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.687504 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:04.687515 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:10:04.687526 | orchestrator | 2025-09-20 10:10:04.687537 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 10:10:04.687547 | orchestrator | Saturday 20 September 2025 10:09:53 +0000 (0:00:00.342) 0:00:05.290 **** 2025-09-20 10:10:04.687558 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.687569 | orchestrator | 2025-09-20 10:10:04.687580 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 10:10:04.687591 | orchestrator | Saturday 20 September 2025 10:09:54 +0000 (0:00:00.705) 0:00:05.996 **** 2025-09-20 10:10:04.687601 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.687612 | orchestrator | 2025-09-20 10:10:04.687623 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 10:10:04.687634 | orchestrator | Saturday 20 September 2025 10:09:54 +0000 (0:00:00.258) 0:00:06.255 **** 2025-09-20 10:10:04.687644 | orchestrator | 
skipping: [testbed-node-0] 2025-09-20 10:10:04.687677 | orchestrator | 2025-09-20 10:10:04.687689 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:04.687729 | orchestrator | Saturday 20 September 2025 10:09:55 +0000 (0:00:00.270) 0:00:06.525 **** 2025-09-20 10:10:04.687740 | orchestrator | 2025-09-20 10:10:04.687751 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:04.687762 | orchestrator | Saturday 20 September 2025 10:09:55 +0000 (0:00:00.069) 0:00:06.595 **** 2025-09-20 10:10:04.687773 | orchestrator | 2025-09-20 10:10:04.687783 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:04.687794 | orchestrator | Saturday 20 September 2025 10:09:55 +0000 (0:00:00.073) 0:00:06.668 **** 2025-09-20 10:10:04.687805 | orchestrator | 2025-09-20 10:10:04.687815 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 10:10:04.687826 | orchestrator | Saturday 20 September 2025 10:09:55 +0000 (0:00:00.078) 0:00:06.747 **** 2025-09-20 10:10:04.687837 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.687847 | orchestrator | 2025-09-20 10:10:04.687858 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-20 10:10:04.687869 | orchestrator | Saturday 20 September 2025 10:09:55 +0000 (0:00:00.279) 0:00:07.027 **** 2025-09-20 10:10:04.687880 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.687890 | orchestrator | 2025-09-20 10:10:04.687915 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-09-20 10:10:04.687928 | orchestrator | Saturday 20 September 2025 10:09:55 +0000 (0:00:00.242) 0:00:07.269 **** 2025-09-20 10:10:04.687938 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.687949 | orchestrator | 
2025-09-20 10:10:04.687960 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-09-20 10:10:04.687971 | orchestrator | Saturday 20 September 2025 10:09:55 +0000 (0:00:00.110) 0:00:07.379 **** 2025-09-20 10:10:04.687982 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:10:04.687992 | orchestrator | 2025-09-20 10:10:04.688003 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-09-20 10:10:04.688014 | orchestrator | Saturday 20 September 2025 10:09:57 +0000 (0:00:01.581) 0:00:08.961 **** 2025-09-20 10:10:04.688066 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.688079 | orchestrator | 2025-09-20 10:10:04.688091 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-09-20 10:10:04.688102 | orchestrator | Saturday 20 September 2025 10:09:57 +0000 (0:00:00.388) 0:00:09.349 **** 2025-09-20 10:10:04.688113 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.688124 | orchestrator | 2025-09-20 10:10:04.688135 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-09-20 10:10:04.688146 | orchestrator | Saturday 20 September 2025 10:09:58 +0000 (0:00:00.328) 0:00:09.678 **** 2025-09-20 10:10:04.688156 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.688167 | orchestrator | 2025-09-20 10:10:04.688179 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-09-20 10:10:04.688189 | orchestrator | Saturday 20 September 2025 10:09:58 +0000 (0:00:00.326) 0:00:10.005 **** 2025-09-20 10:10:04.688200 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.688211 | orchestrator | 2025-09-20 10:10:04.688222 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-09-20 10:10:04.688233 | orchestrator | Saturday 20 September 2025 10:09:58 +0000 (0:00:00.325) 
0:00:10.330 **** 2025-09-20 10:10:04.688244 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.688255 | orchestrator | 2025-09-20 10:10:04.688266 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-09-20 10:10:04.688277 | orchestrator | Saturday 20 September 2025 10:09:58 +0000 (0:00:00.129) 0:00:10.459 **** 2025-09-20 10:10:04.688288 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.688299 | orchestrator | 2025-09-20 10:10:04.688310 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-09-20 10:10:04.688321 | orchestrator | Saturday 20 September 2025 10:09:59 +0000 (0:00:00.133) 0:00:10.593 **** 2025-09-20 10:10:04.688332 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.688343 | orchestrator | 2025-09-20 10:10:04.688360 | orchestrator | TASK [Gather status data] ****************************************************** 2025-09-20 10:10:04.688371 | orchestrator | Saturday 20 September 2025 10:09:59 +0000 (0:00:00.112) 0:00:10.706 **** 2025-09-20 10:10:04.688382 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:10:04.688393 | orchestrator | 2025-09-20 10:10:04.688404 | orchestrator | TASK [Set health test data] **************************************************** 2025-09-20 10:10:04.688415 | orchestrator | Saturday 20 September 2025 10:10:00 +0000 (0:00:01.371) 0:00:12.077 **** 2025-09-20 10:10:04.688426 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.688437 | orchestrator | 2025-09-20 10:10:04.688448 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-09-20 10:10:04.688459 | orchestrator | Saturday 20 September 2025 10:10:00 +0000 (0:00:00.304) 0:00:12.381 **** 2025-09-20 10:10:04.688470 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.688481 | orchestrator | 2025-09-20 10:10:04.688492 | orchestrator | TASK [Pass cluster-health if health is acceptable] 
***************************** 2025-09-20 10:10:04.688503 | orchestrator | Saturday 20 September 2025 10:10:01 +0000 (0:00:00.134) 0:00:12.515 **** 2025-09-20 10:10:04.688514 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:04.688525 | orchestrator | 2025-09-20 10:10:04.688536 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-09-20 10:10:04.688547 | orchestrator | Saturday 20 September 2025 10:10:01 +0000 (0:00:00.138) 0:00:12.654 **** 2025-09-20 10:10:04.688557 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.688568 | orchestrator | 2025-09-20 10:10:04.688579 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-09-20 10:10:04.688590 | orchestrator | Saturday 20 September 2025 10:10:01 +0000 (0:00:00.138) 0:00:12.793 **** 2025-09-20 10:10:04.688601 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.688612 | orchestrator | 2025-09-20 10:10:04.688623 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-20 10:10:04.688634 | orchestrator | Saturday 20 September 2025 10:10:01 +0000 (0:00:00.331) 0:00:13.124 **** 2025-09-20 10:10:04.688645 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:04.688656 | orchestrator | 2025-09-20 10:10:04.688667 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-20 10:10:04.688678 | orchestrator | Saturday 20 September 2025 10:10:01 +0000 (0:00:00.286) 0:00:13.411 **** 2025-09-20 10:10:04.688689 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:04.688726 | orchestrator | 2025-09-20 10:10:04.688738 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 10:10:04.688749 | orchestrator | Saturday 20 September 2025 10:10:02 +0000 (0:00:00.307) 0:00:13.718 **** 2025-09-20 10:10:04.688759 | orchestrator | 
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:04.688771 | orchestrator | 2025-09-20 10:10:04.688782 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 10:10:04.688793 | orchestrator | Saturday 20 September 2025 10:10:03 +0000 (0:00:01.624) 0:00:15.342 **** 2025-09-20 10:10:04.688804 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:04.688815 | orchestrator | 2025-09-20 10:10:04.688826 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 10:10:04.688840 | orchestrator | Saturday 20 September 2025 10:10:04 +0000 (0:00:00.272) 0:00:15.614 **** 2025-09-20 10:10:04.688851 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:04.688862 | orchestrator | 2025-09-20 10:10:04.688880 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:07.173050 | orchestrator | Saturday 20 September 2025 10:10:04 +0000 (0:00:00.281) 0:00:15.895 **** 2025-09-20 10:10:07.173130 | orchestrator | 2025-09-20 10:10:07.173144 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:07.173156 | orchestrator | Saturday 20 September 2025 10:10:04 +0000 (0:00:00.090) 0:00:15.986 **** 2025-09-20 10:10:07.173167 | orchestrator | 2025-09-20 10:10:07.173198 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:07.173210 | orchestrator | Saturday 20 September 2025 10:10:04 +0000 (0:00:00.076) 0:00:16.062 **** 2025-09-20 10:10:07.173221 | orchestrator | 2025-09-20 10:10:07.173233 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-20 10:10:07.173244 | orchestrator | Saturday 20 September 2025 10:10:04 +0000 (0:00:00.074) 0:00:16.137 **** 2025-09-20 10:10:07.173255 | 
orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:07.173266 | orchestrator | 2025-09-20 10:10:07.173277 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 10:10:07.173288 | orchestrator | Saturday 20 September 2025 10:10:06 +0000 (0:00:01.559) 0:00:17.696 **** 2025-09-20 10:10:07.173299 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-20 10:10:07.173310 | orchestrator |  "msg": [ 2025-09-20 10:10:07.173322 | orchestrator |  "Validator run completed.", 2025-09-20 10:10:07.173333 | orchestrator |  "You can find the report file here:", 2025-09-20 10:10:07.173345 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-09-20T10:09:49+00:00-report.json", 2025-09-20 10:10:07.173356 | orchestrator |  "on the following host:", 2025-09-20 10:10:07.173368 | orchestrator |  "testbed-manager" 2025-09-20 10:10:07.173379 | orchestrator |  ] 2025-09-20 10:10:07.173390 | orchestrator | } 2025-09-20 10:10:07.173401 | orchestrator | 2025-09-20 10:10:07.173412 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:10:07.173437 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-20 10:10:07.173450 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:10:07.173462 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:10:07.173473 | orchestrator | 2025-09-20 10:10:07.173484 | orchestrator | 2025-09-20 10:10:07.173496 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:10:07.173507 | orchestrator | Saturday 20 September 2025 10:10:06 +0000 (0:00:00.596) 0:00:18.292 **** 2025-09-20 10:10:07.173518 | orchestrator | 
=============================================================================== 2025-09-20 10:10:07.173529 | orchestrator | Aggregate test results step one ----------------------------------------- 1.62s 2025-09-20 10:10:07.173540 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.58s 2025-09-20 10:10:07.173552 | orchestrator | Write report file ------------------------------------------------------- 1.56s 2025-09-20 10:10:07.173563 | orchestrator | Gather status data ------------------------------------------------------ 1.37s 2025-09-20 10:10:07.173574 | orchestrator | Get container info ------------------------------------------------------ 1.03s 2025-09-20 10:10:07.173585 | orchestrator | Create report output directory ------------------------------------------ 0.86s 2025-09-20 10:10:07.173600 | orchestrator | Aggregate test results step one ----------------------------------------- 0.71s 2025-09-20 10:10:07.173611 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s 2025-09-20 10:10:07.173624 | orchestrator | Print report file information ------------------------------------------- 0.60s 2025-09-20 10:10:07.173637 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s 2025-09-20 10:10:07.173649 | orchestrator | Set quorum test data ---------------------------------------------------- 0.39s 2025-09-20 10:10:07.173662 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.34s 2025-09-20 10:10:07.173675 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s 2025-09-20 10:10:07.173688 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.33s 2025-09-20 10:10:07.173730 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2025-09-20 10:10:07.173744 | orchestrator | Set fsid test 
vars ------------------------------------------------------ 0.33s 2025-09-20 10:10:07.173756 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2025-09-20 10:10:07.173768 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-09-20 10:10:07.173781 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.31s 2025-09-20 10:10:07.173794 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2025-09-20 10:10:07.499557 | orchestrator | + osism validate ceph-mgrs 2025-09-20 10:10:29.174813 | orchestrator | 2025-09-20 10:10:29.174927 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-09-20 10:10:29.174944 | orchestrator | 2025-09-20 10:10:29.174956 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-20 10:10:29.174968 | orchestrator | Saturday 20 September 2025 10:10:14 +0000 (0:00:00.457) 0:00:00.458 **** 2025-09-20 10:10:29.174980 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:29.174991 | orchestrator | 2025-09-20 10:10:29.175002 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-20 10:10:29.175013 | orchestrator | Saturday 20 September 2025 10:10:14 +0000 (0:00:00.677) 0:00:01.135 **** 2025-09-20 10:10:29.175024 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:29.175035 | orchestrator | 2025-09-20 10:10:29.175046 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-20 10:10:29.175057 | orchestrator | Saturday 20 September 2025 10:10:15 +0000 (0:00:00.883) 0:00:02.019 **** 2025-09-20 10:10:29.175068 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.175080 | orchestrator | 2025-09-20 10:10:29.175091 | 
orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-20 10:10:29.175101 | orchestrator | Saturday 20 September 2025 10:10:15 +0000 (0:00:00.254) 0:00:02.273 **** 2025-09-20 10:10:29.175112 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.175123 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:29.175134 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:10:29.175144 | orchestrator | 2025-09-20 10:10:29.175156 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-20 10:10:29.175168 | orchestrator | Saturday 20 September 2025 10:10:16 +0000 (0:00:00.305) 0:00:02.579 **** 2025-09-20 10:10:29.175178 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:10:29.175189 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.175200 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:29.175211 | orchestrator | 2025-09-20 10:10:29.175222 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-20 10:10:29.175233 | orchestrator | Saturday 20 September 2025 10:10:17 +0000 (0:00:01.085) 0:00:03.665 **** 2025-09-20 10:10:29.175244 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:29.175256 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:10:29.175267 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:10:29.175277 | orchestrator | 2025-09-20 10:10:29.175288 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-20 10:10:29.175299 | orchestrator | Saturday 20 September 2025 10:10:17 +0000 (0:00:00.345) 0:00:04.011 **** 2025-09-20 10:10:29.175310 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.175322 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:29.175334 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:10:29.175347 | orchestrator | 2025-09-20 10:10:29.175360 | orchestrator | TASK [Prepare test data] 
******************************************************* 2025-09-20 10:10:29.175372 | orchestrator | Saturday 20 September 2025 10:10:18 +0000 (0:00:00.506) 0:00:04.517 **** 2025-09-20 10:10:29.175385 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.175398 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:29.175411 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:10:29.175423 | orchestrator | 2025-09-20 10:10:29.175435 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-09-20 10:10:29.175473 | orchestrator | Saturday 20 September 2025 10:10:18 +0000 (0:00:00.329) 0:00:04.847 **** 2025-09-20 10:10:29.175487 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:29.175500 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:10:29.175513 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:10:29.175526 | orchestrator | 2025-09-20 10:10:29.175539 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-09-20 10:10:29.175551 | orchestrator | Saturday 20 September 2025 10:10:18 +0000 (0:00:00.309) 0:00:05.157 **** 2025-09-20 10:10:29.175563 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.175575 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:10:29.175587 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:10:29.175600 | orchestrator | 2025-09-20 10:10:29.175612 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 10:10:29.175625 | orchestrator | Saturday 20 September 2025 10:10:19 +0000 (0:00:00.294) 0:00:05.451 **** 2025-09-20 10:10:29.175637 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:29.175650 | orchestrator | 2025-09-20 10:10:29.175663 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 10:10:29.175675 | orchestrator | Saturday 20 September 2025 10:10:19 +0000 (0:00:00.671) 0:00:06.122 **** 
2025-09-20 10:10:29.175700 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:29.175738 | orchestrator | 2025-09-20 10:10:29.175751 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 10:10:29.175762 | orchestrator | Saturday 20 September 2025 10:10:19 +0000 (0:00:00.256) 0:00:06.378 **** 2025-09-20 10:10:29.175773 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:29.175784 | orchestrator | 2025-09-20 10:10:29.175795 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:29.175806 | orchestrator | Saturday 20 September 2025 10:10:20 +0000 (0:00:00.259) 0:00:06.638 **** 2025-09-20 10:10:29.175817 | orchestrator | 2025-09-20 10:10:29.175827 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:29.175838 | orchestrator | Saturday 20 September 2025 10:10:20 +0000 (0:00:00.072) 0:00:06.710 **** 2025-09-20 10:10:29.175849 | orchestrator | 2025-09-20 10:10:29.175860 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:29.175871 | orchestrator | Saturday 20 September 2025 10:10:20 +0000 (0:00:00.072) 0:00:06.783 **** 2025-09-20 10:10:29.175882 | orchestrator | 2025-09-20 10:10:29.175893 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 10:10:29.175904 | orchestrator | Saturday 20 September 2025 10:10:20 +0000 (0:00:00.075) 0:00:06.858 **** 2025-09-20 10:10:29.175914 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:29.175925 | orchestrator | 2025-09-20 10:10:29.175936 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-20 10:10:29.175947 | orchestrator | Saturday 20 September 2025 10:10:20 +0000 (0:00:00.290) 0:00:07.148 **** 2025-09-20 10:10:29.175958 | orchestrator | skipping: [testbed-node-0] 
2025-09-20 10:10:29.175969 | orchestrator | 2025-09-20 10:10:29.175996 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-09-20 10:10:29.176008 | orchestrator | Saturday 20 September 2025 10:10:21 +0000 (0:00:00.286) 0:00:07.435 **** 2025-09-20 10:10:29.176082 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.176096 | orchestrator | 2025-09-20 10:10:29.176108 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-09-20 10:10:29.176119 | orchestrator | Saturday 20 September 2025 10:10:21 +0000 (0:00:00.131) 0:00:07.567 **** 2025-09-20 10:10:29.176130 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:10:29.176140 | orchestrator | 2025-09-20 10:10:29.176151 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-09-20 10:10:29.176162 | orchestrator | Saturday 20 September 2025 10:10:23 +0000 (0:00:01.997) 0:00:09.565 **** 2025-09-20 10:10:29.176173 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.176195 | orchestrator | 2025-09-20 10:10:29.176207 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-09-20 10:10:29.176218 | orchestrator | Saturday 20 September 2025 10:10:23 +0000 (0:00:00.242) 0:00:09.807 **** 2025-09-20 10:10:29.176229 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.176240 | orchestrator | 2025-09-20 10:10:29.176251 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-09-20 10:10:29.176262 | orchestrator | Saturday 20 September 2025 10:10:24 +0000 (0:00:00.767) 0:00:10.574 **** 2025-09-20 10:10:29.176272 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:29.176284 | orchestrator | 2025-09-20 10:10:29.176295 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-09-20 10:10:29.176330 | orchestrator | Saturday 20 
September 2025 10:10:24 +0000 (0:00:00.136) 0:00:10.710 **** 2025-09-20 10:10:29.176341 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:10:29.176352 | orchestrator | 2025-09-20 10:10:29.176363 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-20 10:10:29.176374 | orchestrator | Saturday 20 September 2025 10:10:24 +0000 (0:00:00.145) 0:00:10.856 **** 2025-09-20 10:10:29.176385 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:29.176396 | orchestrator | 2025-09-20 10:10:29.176408 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-20 10:10:29.176419 | orchestrator | Saturday 20 September 2025 10:10:24 +0000 (0:00:00.258) 0:00:11.115 **** 2025-09-20 10:10:29.176430 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:10:29.176441 | orchestrator | 2025-09-20 10:10:29.176452 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 10:10:29.176463 | orchestrator | Saturday 20 September 2025 10:10:24 +0000 (0:00:00.240) 0:00:11.355 **** 2025-09-20 10:10:29.176474 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:29.176484 | orchestrator | 2025-09-20 10:10:29.176495 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 10:10:29.176506 | orchestrator | Saturday 20 September 2025 10:10:26 +0000 (0:00:01.315) 0:00:12.670 **** 2025-09-20 10:10:29.176517 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:29.176529 | orchestrator | 2025-09-20 10:10:29.176540 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 10:10:29.176550 | orchestrator | Saturday 20 September 2025 10:10:26 +0000 (0:00:00.245) 0:00:12.916 **** 2025-09-20 10:10:29.176562 | orchestrator | changed: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] 2025-09-20 10:10:29.176573 | orchestrator | 2025-09-20 10:10:29.176584 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:29.176595 | orchestrator | Saturday 20 September 2025 10:10:26 +0000 (0:00:00.270) 0:00:13.186 **** 2025-09-20 10:10:29.176606 | orchestrator | 2025-09-20 10:10:29.176617 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:29.176627 | orchestrator | Saturday 20 September 2025 10:10:26 +0000 (0:00:00.067) 0:00:13.254 **** 2025-09-20 10:10:29.176638 | orchestrator | 2025-09-20 10:10:29.176649 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:29.176660 | orchestrator | Saturday 20 September 2025 10:10:26 +0000 (0:00:00.067) 0:00:13.322 **** 2025-09-20 10:10:29.176671 | orchestrator | 2025-09-20 10:10:29.176682 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-20 10:10:29.176693 | orchestrator | Saturday 20 September 2025 10:10:26 +0000 (0:00:00.071) 0:00:13.393 **** 2025-09-20 10:10:29.176705 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:29.176762 | orchestrator | 2025-09-20 10:10:29.176775 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 10:10:29.176786 | orchestrator | Saturday 20 September 2025 10:10:28 +0000 (0:00:01.720) 0:00:15.113 **** 2025-09-20 10:10:29.176796 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-20 10:10:29.176807 | orchestrator |  "msg": [ 2025-09-20 10:10:29.176826 | orchestrator |  "Validator run completed.", 2025-09-20 10:10:29.176837 | orchestrator |  "You can find the report file here:", 2025-09-20 10:10:29.176848 | orchestrator |  
"/opt/reports/validator/ceph-mgrs-validator-2025-09-20T10:10:14+00:00-report.json", 2025-09-20 10:10:29.176861 | orchestrator |  "on the following host:", 2025-09-20 10:10:29.176871 | orchestrator |  "testbed-manager" 2025-09-20 10:10:29.176882 | orchestrator |  ] 2025-09-20 10:10:29.176893 | orchestrator | } 2025-09-20 10:10:29.176904 | orchestrator | 2025-09-20 10:10:29.176915 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:10:29.176927 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-20 10:10:29.176939 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:10:29.176960 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:10:29.502293 | orchestrator | 2025-09-20 10:10:29.502371 | orchestrator | 2025-09-20 10:10:29.502382 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:10:29.502393 | orchestrator | Saturday 20 September 2025 10:10:29 +0000 (0:00:00.449) 0:00:15.563 **** 2025-09-20 10:10:29.502403 | orchestrator | =============================================================================== 2025-09-20 10:10:29.502413 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.00s 2025-09-20 10:10:29.502442 | orchestrator | Write report file ------------------------------------------------------- 1.72s 2025-09-20 10:10:29.502452 | orchestrator | Aggregate test results step one ----------------------------------------- 1.32s 2025-09-20 10:10:29.502461 | orchestrator | Get container info ------------------------------------------------------ 1.09s 2025-09-20 10:10:29.502471 | orchestrator | Create report output directory ------------------------------------------ 0.88s 2025-09-20 10:10:29.502481 | orchestrator | 
Extract list of enabled mgr modules ------------------------------------- 0.77s 2025-09-20 10:10:29.502490 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s 2025-09-20 10:10:29.502500 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s 2025-09-20 10:10:29.502510 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s 2025-09-20 10:10:29.502519 | orchestrator | Print report file information ------------------------------------------- 0.45s 2025-09-20 10:10:29.502529 | orchestrator | Set test result to failed if container is missing ----------------------- 0.35s 2025-09-20 10:10:29.502538 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2025-09-20 10:10:29.502548 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2025-09-20 10:10:29.502557 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-09-20 10:10:29.502567 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s 2025-09-20 10:10:29.502576 | orchestrator | Print report file information ------------------------------------------- 0.29s 2025-09-20 10:10:29.502586 | orchestrator | Fail due to missing containers ------------------------------------------ 0.29s 2025-09-20 10:10:29.502595 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s 2025-09-20 10:10:29.502605 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-09-20 10:10:29.502615 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s 2025-09-20 10:10:29.821540 | orchestrator | + osism validate ceph-osds 2025-09-20 10:10:50.946929 | orchestrator | 2025-09-20 10:10:50.947026 | orchestrator | PLAY [Ceph validate 
OSDs] ****************************************************** 2025-09-20 10:10:50.947039 | orchestrator | 2025-09-20 10:10:50.947049 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-20 10:10:50.947079 | orchestrator | Saturday 20 September 2025 10:10:46 +0000 (0:00:00.461) 0:00:00.461 **** 2025-09-20 10:10:50.947089 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:50.947098 | orchestrator | 2025-09-20 10:10:50.947107 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-20 10:10:50.947116 | orchestrator | Saturday 20 September 2025 10:10:47 +0000 (0:00:00.678) 0:00:01.140 **** 2025-09-20 10:10:50.947124 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:50.947133 | orchestrator | 2025-09-20 10:10:50.947142 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-20 10:10:50.947173 | orchestrator | Saturday 20 September 2025 10:10:47 +0000 (0:00:00.246) 0:00:01.386 **** 2025-09-20 10:10:50.947183 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:10:50.947192 | orchestrator | 2025-09-20 10:10:50.947201 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-20 10:10:50.947209 | orchestrator | Saturday 20 September 2025 10:10:48 +0000 (0:00:01.044) 0:00:02.430 **** 2025-09-20 10:10:50.947218 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:50.947228 | orchestrator | 2025-09-20 10:10:50.947237 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-09-20 10:10:50.947255 | orchestrator | Saturday 20 September 2025 10:10:48 +0000 (0:00:00.121) 0:00:02.552 **** 2025-09-20 10:10:50.947265 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:50.947274 | orchestrator | 2025-09-20 10:10:50.947283 | 
orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-09-20 10:10:50.947292 | orchestrator | Saturday 20 September 2025 10:10:48 +0000 (0:00:00.133) 0:00:02.685 **** 2025-09-20 10:10:50.947300 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:50.947309 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:10:50.947318 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:10:50.947327 | orchestrator | 2025-09-20 10:10:50.947336 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-09-20 10:10:50.947345 | orchestrator | Saturday 20 September 2025 10:10:48 +0000 (0:00:00.297) 0:00:02.982 **** 2025-09-20 10:10:50.947353 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:50.947362 | orchestrator | 2025-09-20 10:10:50.947371 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-09-20 10:10:50.947380 | orchestrator | Saturday 20 September 2025 10:10:49 +0000 (0:00:00.171) 0:00:03.153 **** 2025-09-20 10:10:50.947388 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:50.947397 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:10:50.947406 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:10:50.947415 | orchestrator | 2025-09-20 10:10:50.947423 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-09-20 10:10:50.947432 | orchestrator | Saturday 20 September 2025 10:10:49 +0000 (0:00:00.327) 0:00:03.481 **** 2025-09-20 10:10:50.947441 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:50.947450 | orchestrator | 2025-09-20 10:10:50.947459 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 10:10:50.947468 | orchestrator | Saturday 20 September 2025 10:10:50 +0000 (0:00:00.552) 0:00:04.033 **** 2025-09-20 10:10:50.947477 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:50.947486 | 
orchestrator | ok: [testbed-node-4] 2025-09-20 10:10:50.947497 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:10:50.947507 | orchestrator | 2025-09-20 10:10:50.947516 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-09-20 10:10:50.947527 | orchestrator | Saturday 20 September 2025 10:10:50 +0000 (0:00:00.568) 0:00:04.602 **** 2025-09-20 10:10:50.947539 | orchestrator | skipping: [testbed-node-3] => (item={'id': '67888597c5d78926b64e5cb6c7c014480c7c2262358caeb9ce47b8e6c4cd3329', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-20 10:10:50.947552 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7ab11a2e0426a6890ab3407be3bb8127ae8ec84d25295a7c39f2a4deb0ccaec2', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-20 10:10:50.947569 | orchestrator | skipping: [testbed-node-3] => (item={'id': '27dfacf8102b678d13f3e8be944bc930b2eafa8fe4640950ef28093a7f6ded78', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-20 10:10:50.947582 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f5281535bfec3b1d888fb4355df245a94f740d8bbbc108089cf7821611db9101', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-20 10:10:50.947598 | orchestrator | skipping: [testbed-node-3] => (item={'id': '047342939c767e276225034e2e757282a47d95b82bf52b2022a459ac8dd8631e', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-09-20 10:10:50.947623 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'c42a0fe6895ac4cb958856df5c2213dfae81b87a2867a9cf8895bd55ae7d91aa', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-09-20 10:10:50.947634 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a3f3ae5c249c7598e25646247e608887cebc454f22dd63b1a9304425750f418a', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-20 10:10:50.947645 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c90fa43c8b1cadb3695d4e2026e22d78ec007fe8a39b589b50ae7a31e1ebfbf3', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-20 10:10:50.947655 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c0ba196e0905664eff54a831511cfcdadeda9b16a887636c424b0e7500537bc5', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-20 10:10:50.947672 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2d855076dd7f63f4b9824a53541d04b7502d2c64d0710ef869631bcdc7411d6d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-20 10:10:50.947683 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5611e4a2a01905f1c632cde9af9becb5539edd15ec8e0b157d9b9c165d55d469', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-09-20 10:10:50.947693 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6f6876e35c350dbbb45d1d18121b38f6b4bee08f6567ebb0d9ad3009e366dbff', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 
'state': 'running', 'status': 'Up 24 minutes'})  2025-09-20 10:10:50.947704 | orchestrator | ok: [testbed-node-3] => (item={'id': 'b101aaa1d3081688fd074a2aab842a02099694b346880ff9c0d5a73feec1cda3', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-20 10:10:50.947715 | orchestrator | ok: [testbed-node-3] => (item={'id': '5fbcdf5cf99016277a85993f6ca964918f8d1ee36125f9ca19e61044d59450e5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-20 10:10:50.947745 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1f44aad9f08a856130a28152fe306ed0a368a4234d5b20144f45302e45e77d7a', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-20 10:10:50.947756 | orchestrator | skipping: [testbed-node-3] => (item={'id': '24ae986dc85e6d677b1eb0baa60ccfdb843e206d15a3a2a747f1b95793cd0916', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-09-20 10:10:50.947772 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e47464eae7e267072d0ae2f6a0c90872d01a8595efb06ad1521571fe1c7023c', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-20 10:10:50.947783 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8b79049f175a7678f071860ea880803be70130b141acf3d301307ce756861552', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-09-20 10:10:50.947793 | orchestrator | skipping: [testbed-node-3] => (item={'id': '760bcae1372a3250e10f783a93a9f7e1fb6c42d0051be6e9371ba6969d1778ba', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-20 10:10:50.947803 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40a6e8edf6fc30418c4d361b9d1d3027318dfc25d3ec7255cfb798e9fc6827ec', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-20 10:10:50.947813 | orchestrator | skipping: [testbed-node-4] => (item={'id': '185f901c7774b626b768c500915ce977c6e92aecaf949b2e4c4e7c752d3fcbc0', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-20 10:10:50.947830 | orchestrator | skipping: [testbed-node-4] => (item={'id': '518d2e8f6f8db1f71ed6d01ff80ff9afdd1a298470f233b92889238456766d6e', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-20 10:10:51.206134 | orchestrator | skipping: [testbed-node-4] => (item={'id': '032ced68388966681e3c045ed232e0a2a9d0fb203e6bcd0efe182ea71b44ce24', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-20 10:10:51.206231 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ae15cb4a037fe46dbcea5baa70aa33eb06c6b51fb95eda04ae9ef7b854079e04', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-20 10:10:51.206248 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'db506ff541999541da5bf757986f8b056461dcc32fff76fdf25b0bb55de0a9d8', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-09-20 10:10:51.206261 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'6169387f836e16f17bc0c6ad6b40e64f726ec36f0c7b6f5e787f63f7a961c8fc', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-09-20 10:10:51.206273 | orchestrator | skipping: [testbed-node-4] => (item={'id': '120ae2e9b2f4fef23f12d35023ff297af5275dc5d5d005b22b502f8f6f102939', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-09-20 10:10:51.206285 | orchestrator | skipping: [testbed-node-4] => (item={'id': '26c5e2b5ab36d8a4798c8082c76be4374f3ebac00571f8dd42b53029f5f6a09d', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-20 10:10:51.206296 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2d6dc7c1d08cfeeea5096eab14af5a92fd14261b9c5df8adaf06bffce0a9da8c', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-20 10:10:51.206332 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f2cf0d85f178ab09bebfac8882f5f5954f256b32335be3f80cb66c090bf0d36e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-20 10:10:51.206361 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1535a3d27cf15426c2fe63776ca2921016174592932f9e3b4f6bf6ba35d52776', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-09-20 10:10:51.206373 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a03fa3fc50404c20d9b3162899e18ce9276b2bbfa1702a6e42ab615854fe2412', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 
'state': 'running', 'status': 'Up 24 minutes'})  2025-09-20 10:10:51.206386 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e6f32d87bb7e2935c9a204568e1fa7b2a9aae901a159eac3b83b3973e42a993f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-20 10:10:51.206398 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ecc540bc7ff151395610f1695cbeb744b3602c48443d6b4114568c07fcb423e6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-20 10:10:51.206409 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd580542008ac3ac17d9f0e367e65d979972f0fae5922eda030908b9dd0f77b42', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-20 10:10:51.206421 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0148ef2d65995c1033232e7ee9b893764a62c19f915cccefd9c64e1152722b31', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-09-20 10:10:51.206433 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b34f2bc5620a710c23dc573e2e174725473e8216c0177003efc401803c11cd08', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-20 10:10:51.206478 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5448de07a0eb43789885717c23f7ae206e199374c7023f3957c26ad9675718bc', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-09-20 10:10:51.206502 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0158e80807f98ac3d11160da79e5b473f9105857827049173a018fcc682d415b', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-20 10:10:51.206515 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a5c75b2516806163c6cf25563217be698899eb4fd38fbeab5f25a5c1bc0052ee', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-20 10:10:51.206526 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4adfa9ba52bed1ba43e7f7d768732a7366048b618c343bfa09e5e1970264c7b1', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-09-20 10:10:51.206549 | orchestrator | skipping: [testbed-node-5] => (item={'id': '30984990aa919db6e676c238604f84fb1083f1be533e8b6ecbdb31d5d2c33c19', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-20 10:10:51.206561 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aee62b1f51551ca16f4b44a03d733f8adae5d2f58c4944a500dc52221405c14b', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-09-20 10:10:51.206572 | orchestrator | skipping: [testbed-node-5] => (item={'id': '89248206a755ad1a4e2ffafe397e02a2f3e2ab4d5f06e7b8a93acf9e15999994', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-20 10:10:51.206591 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd1d8f284dbb80bef3ea5f635cbcfdcbc1b82e78863f852f61afb7daefdb38ff1', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-09-20 10:10:51.206602 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'6354ee0a856e837f068be1817bedaa33495a0a60f4d1278ffb02b6703f156845', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-09-20 10:10:51.206614 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a6c1d078ede171b7ae6aeb9324c4311eb2642feaaef5fd854bdc71c842ba397', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-20 10:10:51.206625 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6b9a032ecccf82b8dd18bee947bf7826884fa5cd6f7429b9d89fc36201a9ff0f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-09-20 10:10:51.206638 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f63158446fb6c16f03310946d1e8370438eec2aad756477476ccc18c42731f1b', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-20 10:10:51.206651 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7d19357f182d9eadf9879c9e78980d9db49b3052f6a732d2681e47882c36f2fd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-20 10:10:51.206665 | orchestrator | skipping: [testbed-node-5] => (item={'id': '08ddb957f21fae1a7125c52cc8669b930f5260edf32f91d7a6fc266b20bd3b4b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-09-20 10:10:51.206678 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e5a02f042ab4712b01f9d7bc2fa79def7481ad4dc8eb80e027f245cee50ef33e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 
'state': 'running', 'status': 'Up 24 minutes'})  2025-09-20 10:10:51.206699 | orchestrator | ok: [testbed-node-5] => (item={'id': '8930febb20ca84b789e8809dfc44297760d70d24b96d2ed90ce611b035e89028', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-09-20 10:10:59.384094 | orchestrator | ok: [testbed-node-5] => (item={'id': 'fdbf1fe59ddaba6a5c1a7ccf01bb6113c14ccad21061ae2ca70f6144999e717a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-20 10:10:59.384196 | orchestrator | skipping: [testbed-node-5] => (item={'id': '700e44b8af7495af8725415c6861b9934c8e66ad20d1b6cee06580354ffd7fd6', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-20 10:10:59.384213 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ed2a0b1c6067cedc4ac8028f1826aea359221935704f2bedc0d0f6cb96dbd35d', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-09-20 10:10:59.384242 | orchestrator | skipping: [testbed-node-5] => (item={'id': '87308692ec0544fcc0dd06972659936e69e443fd06e508983c09bd5cad2f6104', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-20 10:10:59.384279 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0fae332844a803a7a9e1c653e70ff37b8003511df7235f2ec37d482226f87896', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-09-20 10:10:59.384292 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b2a818897669ec39e6c10b84f027fcfc7638dfd09202b463a3f1c4052ea0f642', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-20 10:10:59.384303 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0daee7fcf3896273b286f5105185b042d93217f90b0e88daaebe9bef648ca1ef', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-20 10:10:59.384315 | orchestrator | 2025-09-20 10:10:59.384327 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-09-20 10:10:59.384339 | orchestrator | Saturday 20 September 2025 10:10:51 +0000 (0:00:00.608) 0:00:05.210 **** 2025-09-20 10:10:59.384350 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.384362 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:10:59.384372 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:10:59.384383 | orchestrator | 2025-09-20 10:10:59.384394 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-09-20 10:10:59.384404 | orchestrator | Saturday 20 September 2025 10:10:51 +0000 (0:00:00.314) 0:00:05.525 **** 2025-09-20 10:10:59.384443 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.384457 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:10:59.384467 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:10:59.384478 | orchestrator | 2025-09-20 10:10:59.384489 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-09-20 10:10:59.384500 | orchestrator | Saturday 20 September 2025 10:10:51 +0000 (0:00:00.290) 0:00:05.816 **** 2025-09-20 10:10:59.384511 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.384522 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:10:59.384532 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:10:59.384543 | orchestrator | 2025-09-20 10:10:59.384554 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 
10:10:59.384565 | orchestrator | Saturday 20 September 2025 10:10:52 +0000 (0:00:00.530) 0:00:06.346 **** 2025-09-20 10:10:59.384575 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.384586 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:10:59.384597 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:10:59.384608 | orchestrator | 2025-09-20 10:10:59.384619 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-09-20 10:10:59.384630 | orchestrator | Saturday 20 September 2025 10:10:52 +0000 (0:00:00.315) 0:00:06.661 **** 2025-09-20 10:10:59.384642 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-09-20 10:10:59.384657 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-09-20 10:10:59.384670 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.384682 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-09-20 10:10:59.384695 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-09-20 10:10:59.384707 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:10:59.384719 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-09-20 10:10:59.384754 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-09-20 10:10:59.384767 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:10:59.384779 | orchestrator | 2025-09-20 10:10:59.384791 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-09-20 10:10:59.384804 | orchestrator | Saturday 20 September 2025 10:10:52 +0000 (0:00:00.338) 0:00:07.000 **** 2025-09-20 10:10:59.384825 | orchestrator | ok: [testbed-node-3] 
2025-09-20 10:10:59.384837 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:10:59.384850 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:10:59.384863 | orchestrator | 2025-09-20 10:10:59.384891 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-20 10:10:59.384904 | orchestrator | Saturday 20 September 2025 10:10:53 +0000 (0:00:00.329) 0:00:07.330 **** 2025-09-20 10:10:59.384917 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.384930 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:10:59.384941 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:10:59.384953 | orchestrator | 2025-09-20 10:10:59.384965 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-20 10:10:59.384977 | orchestrator | Saturday 20 September 2025 10:10:53 +0000 (0:00:00.469) 0:00:07.799 **** 2025-09-20 10:10:59.384989 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.385002 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:10:59.385012 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:10:59.385023 | orchestrator | 2025-09-20 10:10:59.385034 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-09-20 10:10:59.385045 | orchestrator | Saturday 20 September 2025 10:10:54 +0000 (0:00:00.312) 0:00:08.112 **** 2025-09-20 10:10:59.385056 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.385066 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:10:59.385077 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:10:59.385088 | orchestrator | 2025-09-20 10:10:59.385099 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 10:10:59.385111 | orchestrator | Saturday 20 September 2025 10:10:54 +0000 (0:00:00.318) 0:00:08.431 **** 2025-09-20 10:10:59.385122 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.385132 | 
orchestrator | 2025-09-20 10:10:59.385143 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 10:10:59.385154 | orchestrator | Saturday 20 September 2025 10:10:54 +0000 (0:00:00.247) 0:00:08.678 **** 2025-09-20 10:10:59.385165 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.385176 | orchestrator | 2025-09-20 10:10:59.385187 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 10:10:59.385198 | orchestrator | Saturday 20 September 2025 10:10:54 +0000 (0:00:00.243) 0:00:08.922 **** 2025-09-20 10:10:59.385209 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.385220 | orchestrator | 2025-09-20 10:10:59.385231 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:59.385242 | orchestrator | Saturday 20 September 2025 10:10:55 +0000 (0:00:00.247) 0:00:09.169 **** 2025-09-20 10:10:59.385252 | orchestrator | 2025-09-20 10:10:59.385263 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:59.385274 | orchestrator | Saturday 20 September 2025 10:10:55 +0000 (0:00:00.069) 0:00:09.239 **** 2025-09-20 10:10:59.385285 | orchestrator | 2025-09-20 10:10:59.385296 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:10:59.385307 | orchestrator | Saturday 20 September 2025 10:10:55 +0000 (0:00:00.067) 0:00:09.307 **** 2025-09-20 10:10:59.385317 | orchestrator | 2025-09-20 10:10:59.385328 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 10:10:59.385339 | orchestrator | Saturday 20 September 2025 10:10:55 +0000 (0:00:00.278) 0:00:09.586 **** 2025-09-20 10:10:59.385350 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.385361 | orchestrator | 2025-09-20 10:10:59.385372 | orchestrator | TASK [Fail 
early due to containers not running] ******************************** 2025-09-20 10:10:59.385382 | orchestrator | Saturday 20 September 2025 10:10:55 +0000 (0:00:00.249) 0:00:09.835 **** 2025-09-20 10:10:59.385393 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.385404 | orchestrator | 2025-09-20 10:10:59.385415 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 10:10:59.385426 | orchestrator | Saturday 20 September 2025 10:10:56 +0000 (0:00:00.267) 0:00:10.103 **** 2025-09-20 10:10:59.385442 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.385454 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:10:59.385465 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:10:59.385475 | orchestrator | 2025-09-20 10:10:59.385486 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-09-20 10:10:59.385497 | orchestrator | Saturday 20 September 2025 10:10:56 +0000 (0:00:00.327) 0:00:10.430 **** 2025-09-20 10:10:59.385508 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.385519 | orchestrator | 2025-09-20 10:10:59.385530 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-09-20 10:10:59.385541 | orchestrator | Saturday 20 September 2025 10:10:56 +0000 (0:00:00.233) 0:00:10.663 **** 2025-09-20 10:10:59.385552 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-20 10:10:59.385562 | orchestrator | 2025-09-20 10:10:59.385574 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-09-20 10:10:59.385584 | orchestrator | Saturday 20 September 2025 10:10:58 +0000 (0:00:01.555) 0:00:12.219 **** 2025-09-20 10:10:59.385595 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.385606 | orchestrator | 2025-09-20 10:10:59.385617 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 
2025-09-20 10:10:59.385628 | orchestrator | Saturday 20 September 2025 10:10:58 +0000 (0:00:00.137) 0:00:12.357 **** 2025-09-20 10:10:59.385639 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.385650 | orchestrator | 2025-09-20 10:10:59.385661 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-09-20 10:10:59.385672 | orchestrator | Saturday 20 September 2025 10:10:58 +0000 (0:00:00.320) 0:00:12.678 **** 2025-09-20 10:10:59.385683 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:10:59.385694 | orchestrator | 2025-09-20 10:10:59.385705 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-09-20 10:10:59.385716 | orchestrator | Saturday 20 September 2025 10:10:58 +0000 (0:00:00.114) 0:00:12.793 **** 2025-09-20 10:10:59.385743 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.385755 | orchestrator | 2025-09-20 10:10:59.385807 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 10:10:59.385819 | orchestrator | Saturday 20 September 2025 10:10:58 +0000 (0:00:00.118) 0:00:12.912 **** 2025-09-20 10:10:59.385830 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:10:59.385841 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:10:59.385852 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:10:59.385863 | orchestrator | 2025-09-20 10:10:59.385874 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-09-20 10:10:59.385892 | orchestrator | Saturday 20 September 2025 10:10:59 +0000 (0:00:00.489) 0:00:13.401 **** 2025-09-20 10:11:11.608856 | orchestrator | changed: [testbed-node-3] 2025-09-20 10:11:11.608961 | orchestrator | changed: [testbed-node-4] 2025-09-20 10:11:11.608977 | orchestrator | changed: [testbed-node-5] 2025-09-20 10:11:11.608990 | orchestrator | 2025-09-20 10:11:11.609003 | orchestrator | TASK [Parse LVM data as JSON] 
************************************************** 2025-09-20 10:11:11.609016 | orchestrator | Saturday 20 September 2025 10:11:01 +0000 (0:00:02.534) 0:00:15.935 **** 2025-09-20 10:11:11.609028 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:11:11.609040 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:11:11.609051 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:11:11.609063 | orchestrator | 2025-09-20 10:11:11.609075 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-09-20 10:11:11.609086 | orchestrator | Saturday 20 September 2025 10:11:02 +0000 (0:00:00.334) 0:00:16.270 **** 2025-09-20 10:11:11.609098 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:11:11.609109 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:11:11.609121 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:11:11.609133 | orchestrator | 2025-09-20 10:11:11.609144 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-09-20 10:11:11.609157 | orchestrator | Saturday 20 September 2025 10:11:02 +0000 (0:00:00.494) 0:00:16.764 **** 2025-09-20 10:11:11.609187 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:11:11.609199 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:11:11.609218 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:11:11.609230 | orchestrator | 2025-09-20 10:11:11.609241 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-09-20 10:11:11.609253 | orchestrator | Saturday 20 September 2025 10:11:03 +0000 (0:00:00.569) 0:00:17.334 **** 2025-09-20 10:11:11.609264 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:11:11.609276 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:11:11.609287 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:11:11.609298 | orchestrator | 2025-09-20 10:11:11.609310 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 
2025-09-20 10:11:11.609321 | orchestrator | Saturday 20 September 2025 10:11:03 +0000 (0:00:00.320) 0:00:17.654 **** 2025-09-20 10:11:11.609332 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:11:11.609344 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:11:11.609355 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:11:11.609366 | orchestrator | 2025-09-20 10:11:11.609377 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-09-20 10:11:11.609389 | orchestrator | Saturday 20 September 2025 10:11:03 +0000 (0:00:00.297) 0:00:17.952 **** 2025-09-20 10:11:11.609400 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:11:11.609411 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:11:11.609422 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:11:11.609509 | orchestrator | 2025-09-20 10:11:11.609521 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-20 10:11:11.609532 | orchestrator | Saturday 20 September 2025 10:11:04 +0000 (0:00:00.280) 0:00:18.232 **** 2025-09-20 10:11:11.609543 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:11:11.609553 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:11:11.609564 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:11:11.609574 | orchestrator | 2025-09-20 10:11:11.609585 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-09-20 10:11:11.609596 | orchestrator | Saturday 20 September 2025 10:11:04 +0000 (0:00:00.748) 0:00:18.980 **** 2025-09-20 10:11:11.609607 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:11:11.609617 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:11:11.609628 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:11:11.609638 | orchestrator | 2025-09-20 10:11:11.609649 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-09-20 10:11:11.609660 | 
orchestrator | Saturday 20 September 2025 10:11:05 +0000 (0:00:00.568) 0:00:19.549 **** 2025-09-20 10:11:11.609670 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:11:11.609681 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:11:11.609691 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:11:11.609702 | orchestrator | 2025-09-20 10:11:11.609713 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-09-20 10:11:11.609723 | orchestrator | Saturday 20 September 2025 10:11:05 +0000 (0:00:00.337) 0:00:19.886 **** 2025-09-20 10:11:11.609758 | orchestrator | skipping: [testbed-node-3] 2025-09-20 10:11:11.609770 | orchestrator | skipping: [testbed-node-4] 2025-09-20 10:11:11.609780 | orchestrator | skipping: [testbed-node-5] 2025-09-20 10:11:11.609791 | orchestrator | 2025-09-20 10:11:11.609801 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-09-20 10:11:11.609812 | orchestrator | Saturday 20 September 2025 10:11:06 +0000 (0:00:00.340) 0:00:20.226 **** 2025-09-20 10:11:11.609823 | orchestrator | ok: [testbed-node-3] 2025-09-20 10:11:11.609833 | orchestrator | ok: [testbed-node-4] 2025-09-20 10:11:11.609844 | orchestrator | ok: [testbed-node-5] 2025-09-20 10:11:11.609855 | orchestrator | 2025-09-20 10:11:11.609866 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-20 10:11:11.609877 | orchestrator | Saturday 20 September 2025 10:11:06 +0000 (0:00:00.528) 0:00:20.755 **** 2025-09-20 10:11:11.609888 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:11:11.609899 | orchestrator | 2025-09-20 10:11:11.609918 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-20 10:11:11.609928 | orchestrator | Saturday 20 September 2025 10:11:06 +0000 (0:00:00.243) 0:00:20.999 **** 2025-09-20 10:11:11.609939 | orchestrator | skipping: 
[testbed-node-3] 2025-09-20 10:11:11.609950 | orchestrator | 2025-09-20 10:11:11.609961 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-20 10:11:11.609972 | orchestrator | Saturday 20 September 2025 10:11:07 +0000 (0:00:00.232) 0:00:21.231 **** 2025-09-20 10:11:11.609982 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:11:11.609993 | orchestrator | 2025-09-20 10:11:11.610004 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-20 10:11:11.610071 | orchestrator | Saturday 20 September 2025 10:11:08 +0000 (0:00:01.437) 0:00:22.669 **** 2025-09-20 10:11:11.610087 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:11:11.610098 | orchestrator | 2025-09-20 10:11:11.610109 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-20 10:11:11.610120 | orchestrator | Saturday 20 September 2025 10:11:08 +0000 (0:00:00.244) 0:00:22.913 **** 2025-09-20 10:11:11.610150 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:11:11.610161 | orchestrator | 2025-09-20 10:11:11.610172 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:11:11.610183 | orchestrator | Saturday 20 September 2025 10:11:09 +0000 (0:00:00.221) 0:00:23.135 **** 2025-09-20 10:11:11.610193 | orchestrator | 2025-09-20 10:11:11.610204 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:11:11.610215 | orchestrator | Saturday 20 September 2025 10:11:09 +0000 (0:00:00.062) 0:00:23.197 **** 2025-09-20 10:11:11.610225 | orchestrator | 2025-09-20 10:11:11.610236 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-20 10:11:11.610247 | orchestrator | Saturday 20 September 2025 10:11:09 
+0000 (0:00:00.063) 0:00:23.261 **** 2025-09-20 10:11:11.610257 | orchestrator | 2025-09-20 10:11:11.610268 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-20 10:11:11.610279 | orchestrator | Saturday 20 September 2025 10:11:09 +0000 (0:00:00.065) 0:00:23.326 **** 2025-09-20 10:11:11.610289 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-20 10:11:11.610300 | orchestrator | 2025-09-20 10:11:11.610311 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-20 10:11:11.610328 | orchestrator | Saturday 20 September 2025 10:11:10 +0000 (0:00:01.354) 0:00:24.680 **** 2025-09-20 10:11:11.610339 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-09-20 10:11:11.610350 | orchestrator |  "msg": [ 2025-09-20 10:11:11.610361 | orchestrator |  "Validator run completed.", 2025-09-20 10:11:11.610371 | orchestrator |  "You can find the report file here:", 2025-09-20 10:11:11.610382 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-09-20T10:10:46+00:00-report.json", 2025-09-20 10:11:11.610394 | orchestrator |  "on the following host:", 2025-09-20 10:11:11.610405 | orchestrator |  "testbed-manager" 2025-09-20 10:11:11.610416 | orchestrator |  ] 2025-09-20 10:11:11.610427 | orchestrator | } 2025-09-20 10:11:11.610438 | orchestrator | 2025-09-20 10:11:11.610449 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:11:11.610461 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-09-20 10:11:11.610473 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-20 10:11:11.610484 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-20 10:11:11.610495 | orchestrator | 
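The validator run above writes a JSON report to `/opt/reports/validator/` on testbed-manager. As a minimal sketch of consuming such a report (the actual schema is not visible in this log, so the field names `validator_name` and `result` below are hypothetical), a small shell helper could summarize the file:

```shell
#!/bin/sh
# Hedged sketch: summarize a validator report JSON file.
# The field names ("validator_name", "result") are hypothetical -- the real
# schema written by the OSISM validator is not shown in this log.

summarize_report() {
    # $1: path to a report JSON file; prints "<name>: <result>"
    python3 - "$1" <<'EOF'
import json, sys

with open(sys.argv[1]) as f:
    report = json.load(f)

print(f"{report.get('validator_name', 'unknown')}: {report.get('result', 'unknown')}")
EOF
}

# Usage example with a mock report file standing in for
# ceph-osds-validator-<timestamp>-report.json:
tmp=$(mktemp)
printf '{"validator_name": "ceph-osds", "result": "passed"}' > "$tmp"
summary=$(summarize_report "$tmp")
echo "$summary"   # -> ceph-osds: passed
rm -f "$tmp"
```

In a CI context this kind of helper would typically run on testbed-manager after the `osism` validator task completes, turning the report path printed in the log into a one-line pass/fail summary.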
2025-09-20 10:11:11.610513 | orchestrator | 2025-09-20 10:11:11.610524 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:11:11.610534 | orchestrator | Saturday 20 September 2025 10:11:11 +0000 (0:00:00.706) 0:00:25.387 **** 2025-09-20 10:11:11.610545 | orchestrator | =============================================================================== 2025-09-20 10:11:11.610556 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.53s 2025-09-20 10:11:11.610567 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.56s 2025-09-20 10:11:11.610577 | orchestrator | Aggregate test results step one ----------------------------------------- 1.44s 2025-09-20 10:11:11.610588 | orchestrator | Write report file ------------------------------------------------------- 1.35s 2025-09-20 10:11:11.610599 | orchestrator | Create report output directory ------------------------------------------ 1.04s 2025-09-20 10:11:11.610609 | orchestrator | Prepare test data ------------------------------------------------------- 0.75s 2025-09-20 10:11:11.610620 | orchestrator | Print report file information ------------------------------------------- 0.71s 2025-09-20 10:11:11.610631 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s 2025-09-20 10:11:11.610642 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.61s 2025-09-20 10:11:11.610652 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.57s 2025-09-20 10:11:11.610663 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.57s 2025-09-20 10:11:11.610674 | orchestrator | Prepare test data ------------------------------------------------------- 0.57s 2025-09-20 10:11:11.610684 | orchestrator | Calculate total number of OSDs in cluster 
------------------------------- 0.55s 2025-09-20 10:11:11.610695 | orchestrator | Set test result to passed if count matches ------------------------------ 0.53s 2025-09-20 10:11:11.610706 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.53s 2025-09-20 10:11:11.610717 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2025-09-20 10:11:11.610764 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2025-09-20 10:11:11.610777 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.47s 2025-09-20 10:11:11.610788 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s 2025-09-20 10:11:11.610799 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.34s 2025-09-20 10:11:11.809410 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-09-20 10:11:11.816465 | orchestrator | + set -e 2025-09-20 10:11:11.816535 | orchestrator | + source /opt/manager-vars.sh 2025-09-20 10:11:11.816551 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-20 10:11:11.816562 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-20 10:11:11.816573 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-20 10:11:11.816584 | orchestrator | ++ CEPH_VERSION=reef 2025-09-20 10:11:11.816596 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-20 10:11:11.816607 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-20 10:11:11.816619 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 10:11:11.816630 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 10:11:11.816641 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-20 10:11:11.816652 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-20 10:11:11.816663 | orchestrator | ++ export ARA=false 2025-09-20 10:11:11.816674 | orchestrator | ++ ARA=false 2025-09-20 
10:11:11.816685 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-20 10:11:11.816696 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-20 10:11:11.816706 | orchestrator | ++ export TEMPEST=false 2025-09-20 10:11:11.816717 | orchestrator | ++ TEMPEST=false 2025-09-20 10:11:11.816757 | orchestrator | ++ export IS_ZUUL=true 2025-09-20 10:11:11.816769 | orchestrator | ++ IS_ZUUL=true 2025-09-20 10:11:11.816780 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 10:11:11.816792 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.35 2025-09-20 10:11:11.817464 | orchestrator | ++ export EXTERNAL_API=false 2025-09-20 10:11:11.817487 | orchestrator | ++ EXTERNAL_API=false 2025-09-20 10:11:11.817498 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-20 10:11:11.817509 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-20 10:11:11.817519 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-20 10:11:11.817558 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-20 10:11:11.817570 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-20 10:11:11.817581 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-20 10:11:11.817592 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-20 10:11:11.817602 | orchestrator | + source /etc/os-release 2025-09-20 10:11:11.817613 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-09-20 10:11:11.817624 | orchestrator | ++ NAME=Ubuntu 2025-09-20 10:11:11.817635 | orchestrator | ++ VERSION_ID=24.04 2025-09-20 10:11:11.817646 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-09-20 10:11:11.817656 | orchestrator | ++ VERSION_CODENAME=noble 2025-09-20 10:11:11.817667 | orchestrator | ++ ID=ubuntu 2025-09-20 10:11:11.817678 | orchestrator | ++ ID_LIKE=debian 2025-09-20 10:11:11.817689 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-09-20 10:11:11.817700 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-09-20 10:11:11.817723 | orchestrator | ++ 
BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-09-20 10:11:11.817763 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-09-20 10:11:11.817775 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-09-20 10:11:11.817786 | orchestrator | ++ LOGO=ubuntu-logo 2025-09-20 10:11:11.817797 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-09-20 10:11:11.817808 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-09-20 10:11:11.817821 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-20 10:11:11.829792 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-20 10:11:33.314788 | orchestrator | 2025-09-20 10:11:33.314899 | orchestrator | # Status of Elasticsearch 2025-09-20 10:11:33.314915 | orchestrator | 2025-09-20 10:11:33.314928 | orchestrator | + pushd /opt/configuration/contrib 2025-09-20 10:11:33.314941 | orchestrator | + echo 2025-09-20 10:11:33.314952 | orchestrator | + echo '# Status of Elasticsearch' 2025-09-20 10:11:33.314963 | orchestrator | + echo 2025-09-20 10:11:33.314974 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-09-20 10:11:33.512338 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-09-20 10:11:33.512432 | orchestrator | 2025-09-20 10:11:33.512447 | orchestrator | # Status of MariaDB 2025-09-20 10:11:33.512460 | orchestrator | 2025-09-20 10:11:33.512471 | orchestrator | + echo 2025-09-20 10:11:33.512483 | orchestrator | + echo '# Status of MariaDB' 2025-09-20 10:11:33.512494 | orchestrator | + echo 2025-09-20 10:11:33.512505 | orchestrator | + MARIADB_USER=root_shard_0 2025-09-20 10:11:33.512517 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-09-20 10:11:33.577520 | orchestrator | Reading package lists... 2025-09-20 10:11:33.928457 | orchestrator | Building dependency tree... 2025-09-20 10:11:33.928623 | orchestrator | Reading state information... 2025-09-20 10:11:34.344465 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-09-20 10:11:34.344560 | orchestrator | bc set to manually installed. 2025-09-20 10:11:34.344575 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-09-20 10:11:35.074357 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-09-20 10:11:35.074854 | orchestrator | 2025-09-20 10:11:35.074881 | orchestrator | # Status of Prometheus 2025-09-20 10:11:35.074894 | orchestrator | 2025-09-20 10:11:35.074906 | orchestrator | + echo 2025-09-20 10:11:35.074917 | orchestrator | + echo '# Status of Prometheus' 2025-09-20 10:11:35.074928 | orchestrator | + echo 2025-09-20 10:11:35.074940 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-09-20 10:11:35.141140 | orchestrator | Unauthorized 2025-09-20 10:11:35.144385 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-09-20 10:11:35.193532 | orchestrator | Unauthorized 2025-09-20 10:11:35.196864 | orchestrator | 2025-09-20 10:11:35.196904 | orchestrator | # Status of RabbitMQ 2025-09-20 10:11:35.196916 | orchestrator | 2025-09-20 10:11:35.196927 | orchestrator | + echo 2025-09-20 10:11:35.196937 | orchestrator | + echo '# Status of RabbitMQ' 2025-09-20 10:11:35.196946 | orchestrator | + echo 2025-09-20 10:11:35.196957 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-09-20 10:11:35.731443 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-09-20 10:11:35.745590 | orchestrator | 2025-09-20 10:11:35.745666 | orchestrator | # Status of Redis 2025-09-20 10:11:35.745681 | orchestrator | 2025-09-20 10:11:35.745694 | orchestrator | + echo 2025-09-20 10:11:35.745705 | orchestrator | + echo '# Status of Redis' 2025-09-20 10:11:35.745717 | orchestrator | + echo 2025-09-20 10:11:35.745730 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-09-20 10:11:35.751334 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002147s;;;0.000000;10.000000 2025-09-20 10:11:35.751661 | orchestrator | + popd 2025-09-20 10:11:35.752190 | orchestrator | 2025-09-20 10:11:35.752249 | orchestrator | # Create backup of MariaDB database 2025-09-20 10:11:35.752264 | orchestrator | 2025-09-20 10:11:35.752275 | orchestrator | + echo 2025-09-20 10:11:35.752287 | orchestrator | + echo '# Create backup of MariaDB database' 2025-09-20 10:11:35.752298 | orchestrator | + echo 2025-09-20 10:11:35.752310 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-09-20 10:11:37.747239 | orchestrator | 2025-09-20 10:11:37 | INFO  | Task 7018b297-0e42-4d8d-adde-b328555be0e0 (mariadb_backup) was prepared for execution. 2025-09-20 10:11:37.747335 | orchestrator | 2025-09-20 10:11:37 | INFO  | It takes a moment until task 7018b297-0e42-4d8d-adde-b328555be0e0 (mariadb_backup) has been started and output is visible here. 2025-09-20 10:12:19.872168 | orchestrator | 2025-09-20 10:12:19.872274 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-20 10:12:19.872290 | orchestrator | 2025-09-20 10:12:19.872303 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-20 10:12:19.872315 | orchestrator | Saturday 20 September 2025 10:11:41 +0000 (0:00:00.202) 0:00:00.202 **** 2025-09-20 10:12:19.872326 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:12:19.872338 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:12:19.872349 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:12:19.872360 | orchestrator | 2025-09-20 10:12:19.872371 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-20 10:12:19.872382 | orchestrator | Saturday 20 September 2025 10:11:42 +0000 (0:00:00.369) 0:00:00.572 **** 2025-09-20 10:12:19.872393 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-09-20 10:12:19.872405 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-20 10:12:19.872416 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-20 10:12:19.872427 | orchestrator | 2025-09-20 10:12:19.872438 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-20 10:12:19.872449 | orchestrator | 2025-09-20 10:12:19.872460 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-20 10:12:19.872471 | orchestrator | Saturday 20 September 2025 10:11:42 +0000 (0:00:00.571) 0:00:01.144 **** 2025-09-20 10:12:19.872482 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-20 10:12:19.872493 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-20 10:12:19.872504 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-20 10:12:19.872515 | orchestrator | 2025-09-20 10:12:19.872526 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-20 10:12:19.872537 | orchestrator | Saturday 20 September 2025 10:11:43 +0000 (0:00:00.417) 0:00:01.561 **** 2025-09-20 10:12:19.872548 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-20 10:12:19.872560 | orchestrator | 2025-09-20 10:12:19.872571 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-09-20 10:12:19.872582 | orchestrator | Saturday 20 September 2025 10:11:43 +0000 (0:00:00.533) 0:00:02.094 **** 2025-09-20 10:12:19.872593 | orchestrator | ok: [testbed-node-0] 2025-09-20 10:12:19.872604 | orchestrator | ok: [testbed-node-1] 2025-09-20 10:12:19.872640 | orchestrator | ok: [testbed-node-2] 2025-09-20 10:12:19.872652 | orchestrator | 2025-09-20 10:12:19.872663 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-09-20 10:12:19.872673 | orchestrator | Saturday 20 September 2025 10:11:46 +0000 (0:00:03.243) 0:00:05.338 **** 2025-09-20 10:12:19.872684 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-20 10:12:19.872695 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-09-20 10:12:19.872706 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-20 10:12:19.872716 | orchestrator | mariadb_bootstrap_restart 2025-09-20 10:12:19.872729 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:12:19.872741 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:12:19.872753 | orchestrator | changed: [testbed-node-0] 2025-09-20 10:12:19.872789 | orchestrator | 2025-09-20 10:12:19.872808 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-20 10:12:19.872827 | orchestrator | skipping: no hosts matched 2025-09-20 10:12:19.872845 | orchestrator | 2025-09-20 10:12:19.872865 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-20 10:12:19.872885 | orchestrator | skipping: no hosts matched 2025-09-20 10:12:19.872905 | orchestrator | 2025-09-20 10:12:19.872918 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-20 10:12:19.872928 | orchestrator | skipping: no hosts matched 2025-09-20 10:12:19.872939 | orchestrator | 2025-09-20 10:12:19.872950 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-20 10:12:19.872961 | orchestrator | 2025-09-20 10:12:19.872971 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-20 10:12:19.872982 | orchestrator | Saturday 20 September 2025 10:12:18 +0000 (0:00:31.866) 0:00:37.204 **** 2025-09-20 10:12:19.872992 | orchestrator | 
skipping: [testbed-node-0] 2025-09-20 10:12:19.873003 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:12:19.873013 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:12:19.873024 | orchestrator | 2025-09-20 10:12:19.873035 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-20 10:12:19.873045 | orchestrator | Saturday 20 September 2025 10:12:19 +0000 (0:00:00.303) 0:00:37.508 **** 2025-09-20 10:12:19.873075 | orchestrator | skipping: [testbed-node-0] 2025-09-20 10:12:19.873086 | orchestrator | skipping: [testbed-node-1] 2025-09-20 10:12:19.873097 | orchestrator | skipping: [testbed-node-2] 2025-09-20 10:12:19.873108 | orchestrator | 2025-09-20 10:12:19.873118 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:12:19.873131 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-20 10:12:19.873142 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 10:12:19.873153 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-20 10:12:19.873164 | orchestrator | 2025-09-20 10:12:19.873175 | orchestrator | 2025-09-20 10:12:19.873186 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:12:19.873197 | orchestrator | Saturday 20 September 2025 10:12:19 +0000 (0:00:00.427) 0:00:37.935 **** 2025-09-20 10:12:19.873207 | orchestrator | =============================================================================== 2025-09-20 10:12:19.873218 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 31.87s 2025-09-20 10:12:19.873246 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.24s 2025-09-20 10:12:19.873257 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 0.57s 2025-09-20 10:12:19.873268 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.53s 2025-09-20 10:12:19.873288 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.43s 2025-09-20 10:12:19.873299 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s 2025-09-20 10:12:19.873310 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-09-20 10:12:19.873321 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-09-20 10:12:20.191441 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-09-20 10:12:20.198199 | orchestrator | + set -e 2025-09-20 10:12:20.198241 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-20 10:12:20.198255 | orchestrator | ++ export INTERACTIVE=false 2025-09-20 10:12:20.198268 | orchestrator | ++ INTERACTIVE=false 2025-09-20 10:12:20.198278 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-20 10:12:20.198289 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-20 10:12:20.198300 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-20 10:12:20.199449 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-20 10:12:20.207361 | orchestrator | 2025-09-20 10:12:20.207394 | orchestrator | # OpenStack endpoints 2025-09-20 10:12:20.207406 | orchestrator | 2025-09-20 10:12:20.207417 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-20 10:12:20.207428 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-20 10:12:20.207439 | orchestrator | + export OS_CLOUD=admin 2025-09-20 10:12:20.207449 | orchestrator | + OS_CLOUD=admin 2025-09-20 10:12:20.207461 | orchestrator | + echo 2025-09-20 10:12:20.207472 | orchestrator | + echo '# OpenStack 
endpoints' 2025-09-20 10:12:20.207483 | orchestrator | + echo 2025-09-20 10:12:20.207493 | orchestrator | + openstack endpoint list 2025-09-20 10:12:23.858374 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-20 10:12:23.858479 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-09-20 10:12:23.858495 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-20 10:12:23.858508 | orchestrator | | 049a3c6f983c4eb387481bb24d173d01 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-09-20 10:12:23.858520 | orchestrator | | 0b4fab16a1fc442e9d9d552516a7cfc3 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-09-20 10:12:23.858531 | orchestrator | | 0b9858dc03d1458e9b07f4469cc9958a | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-09-20 10:12:23.858542 | orchestrator | | 0f758ad059034506882e1621f2d6dc05 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-09-20 10:12:23.858553 | orchestrator | | 0fa199408cc44bb899673c4fe778a75d | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-09-20 10:12:23.858564 | orchestrator | | 1cc05d934ed04c6aa56c88ff25a2ac7e | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-20 10:12:23.858575 | orchestrator | | 248dbc2d3ea941dcabd18c35a0b784cc | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-09-20 10:12:23.858587 | orchestrator | | 341b90edbcec418696355e24b00419d6 | RegionOne | 
swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-20 10:12:23.858598 | orchestrator | | 49190160cdf04d4ca732011cc5241bd7 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-09-20 10:12:23.858609 | orchestrator | | 4a85d23c67384d35bc6be3a231664640 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-09-20 10:12:23.858641 | orchestrator | | 541958bbd5b44f9a89ef9105d84e10e5 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-09-20 10:12:23.858653 | orchestrator | | 5d86d75e80324db2b2ab0effcd8dc126 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-09-20 10:12:23.858664 | orchestrator | | 68e07bc5e3a4408e9b94f0c47a6a8061 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-09-20 10:12:23.858675 | orchestrator | | 72b540c13a4347f7a275805a680afeba | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-20 10:12:23.858687 | orchestrator | | 79bcf431d46d4f50bd16db7316b3002d | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-09-20 10:12:23.858698 | orchestrator | | 865c8e54fbbc4884ae5ab7ccdab45f76 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-20 10:12:23.858709 | orchestrator | | a0528533903c4be7958d90288bc3a4e4 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-09-20 10:12:23.858720 | orchestrator | | b590d51fbc7545c8b6bf70f71ef84039 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-09-20 10:12:23.858731 | orchestrator | | b598bf4f4d3145ceb30d4b1e203b2b53 | RegionOne | glance | image | True | internal | 
https://api-int.testbed.osism.xyz:9292 | 2025-09-20 10:12:23.858742 | orchestrator | | b6428f65873d45358e498449c57e1cb6 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-09-20 10:12:23.858800 | orchestrator | | e5936e5c57f8446b9733106bfb8c576e | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-09-20 10:12:23.858820 | orchestrator | | e72500e4ce1943b7a60e650355b2ecf9 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-09-20 10:12:23.858832 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-20 10:12:24.126128 | orchestrator | 2025-09-20 10:12:24.126218 | orchestrator | # Cinder 2025-09-20 10:12:24.126232 | orchestrator | 2025-09-20 10:12:24.126243 | orchestrator | + echo 2025-09-20 10:12:24.126284 | orchestrator | + echo '# Cinder' 2025-09-20 10:12:24.126296 | orchestrator | + echo 2025-09-20 10:12:24.126306 | orchestrator | + openstack volume service list 2025-09-20 10:12:26.886885 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-20 10:12:26.886995 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-09-20 10:12:26.887010 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-20 10:12:26.887021 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-20T10:12:21.000000 | 2025-09-20 10:12:26.887032 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-20T10:12:23.000000 | 2025-09-20 10:12:26.887043 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-20T10:12:23.000000 | 2025-09-20 
10:12:26.887054 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-20T10:12:25.000000 | 2025-09-20 10:12:26.887065 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-20T10:12:19.000000 | 2025-09-20 10:12:26.887103 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-20T10:12:20.000000 | 2025-09-20 10:12:26.887114 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-20T10:12:17.000000 | 2025-09-20 10:12:26.887125 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-09-20T10:12:17.000000 | 2025-09-20 10:12:26.887136 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-20T10:12:18.000000 | 2025-09-20 10:12:26.887147 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-20 10:12:27.142264 | orchestrator | 2025-09-20 10:12:27.142358 | orchestrator | # Neutron 2025-09-20 10:12:27.142374 | orchestrator | 2025-09-20 10:12:27.142386 | orchestrator | + echo 2025-09-20 10:12:27.142398 | orchestrator | + echo '# Neutron' 2025-09-20 10:12:27.142410 | orchestrator | + echo 2025-09-20 10:12:27.142421 | orchestrator | + openstack network agent list 2025-09-20 10:12:30.009308 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-20 10:12:30.009405 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-09-20 10:12:30.009420 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-20 10:12:30.009431 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-09-20 
10:12:30.009443 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-09-20 10:12:30.009453 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-09-20 10:12:30.009464 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-09-20 10:12:30.009475 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-09-20 10:12:30.009486 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-09-20 10:12:30.009497 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-20 10:12:30.009507 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-20 10:12:30.009518 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-20 10:12:30.009529 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-20 10:12:30.276008 | orchestrator | + openstack network service provider list 2025-09-20 10:12:32.886610 | orchestrator | +---------------+------+---------+ 2025-09-20 10:12:32.886721 | orchestrator | | Service Type | Name | Default | 2025-09-20 10:12:32.886735 | orchestrator | +---------------+------+---------+ 2025-09-20 10:12:32.886747 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-09-20 10:12:32.886758 | orchestrator | +---------------+------+---------+ 2025-09-20 10:12:33.170298 | orchestrator | 2025-09-20 10:12:33.170387 | orchestrator | # Nova 2025-09-20 10:12:33.170401 | orchestrator 
| 2025-09-20 10:12:33.170412 | orchestrator | + echo 2025-09-20 10:12:33.170423 | orchestrator | + echo '# Nova' 2025-09-20 10:12:33.170435 | orchestrator | + echo 2025-09-20 10:12:33.170466 | orchestrator | + openstack compute service list 2025-09-20 10:12:36.157514 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-20 10:12:36.157621 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-09-20 10:12:36.157637 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-20 10:12:36.157649 | orchestrator | | 3c68547d-b414-43f5-b0c2-fa492c14f0cc | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-20T10:12:32.000000 | 2025-09-20 10:12:36.157660 | orchestrator | | d0b26379-fd94-47a6-be67-0a35f2711d7c | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-20T10:12:33.000000 | 2025-09-20 10:12:36.157671 | orchestrator | | 8160b030-3969-402f-a77a-170c8be30d75 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-20T10:12:25.000000 | 2025-09-20 10:12:36.157681 | orchestrator | | ae060b9f-937c-4cb0-837a-630bf4fc3106 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-20T10:12:33.000000 | 2025-09-20 10:12:36.157692 | orchestrator | | df56895f-d043-418b-a431-4ab44cc955b8 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-09-20T10:12:35.000000 | 2025-09-20 10:12:36.157703 | orchestrator | | 3061cdf6-ce34-45a1-a88a-959a74c8e1b2 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-09-20T10:12:27.000000 | 2025-09-20 10:12:36.157714 | orchestrator | | 1ee06d59-69e3-44bb-9fd8-25b83d1b2813 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-20T10:12:26.000000 | 2025-09-20 10:12:36.157724 | orchestrator | | d85276e0-90a2-4cf8-ae67-d987111f5e39 | 
nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-20T10:12:26.000000 | 2025-09-20 10:12:36.157735 | orchestrator | | d98383bc-93b1-4f48-821f-ef9eb516bd2a | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-20T10:12:27.000000 | 2025-09-20 10:12:36.157746 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-20 10:12:36.447593 | orchestrator | + openstack hypervisor list 2025-09-20 10:12:39.741242 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-20 10:12:39.741344 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-09-20 10:12:39.741358 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-20 10:12:39.741370 | orchestrator | | 3881cd6b-45c9-4b62-a904-0f8e3ee2a503 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-09-20 10:12:39.741381 | orchestrator | | b25968ef-9b19-4dd4-87cd-be9f86fd798e | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-09-20 10:12:39.741392 | orchestrator | | 97d03f72-780b-4c84-937e-da4b0839c7fd | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-09-20 10:12:39.741403 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-20 10:12:40.034503 | orchestrator | 2025-09-20 10:12:40.034600 | orchestrator | # Run OpenStack test play 2025-09-20 10:12:40.034615 | orchestrator | 2025-09-20 10:12:40.034627 | orchestrator | + echo 2025-09-20 10:12:40.034638 | orchestrator | + echo '# Run OpenStack test play' 2025-09-20 10:12:40.034650 | orchestrator | + echo 2025-09-20 10:12:40.034661 | orchestrator | + osism apply --environment openstack test 2025-09-20 10:12:41.982730 | orchestrator | 2025-09-20 10:12:41 | INFO  | Trying to run play test in environment openstack 
2025-09-20 10:12:52.172665 | orchestrator | 2025-09-20 10:12:52 | INFO  | Task 6df763bf-ef1c-4f60-83fe-fb8c78e08c60 (test) was prepared for execution. 2025-09-20 10:12:52.173584 | orchestrator | 2025-09-20 10:12:52 | INFO  | It takes a moment until task 6df763bf-ef1c-4f60-83fe-fb8c78e08c60 (test) has been started and output is visible here. 2025-09-20 10:19:51.646843 | orchestrator | 2025-09-20 10:19:51.647010 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-20 10:19:51.647029 | orchestrator | 2025-09-20 10:19:51.647042 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-20 10:19:51.647078 | orchestrator | Saturday 20 September 2025 10:12:56 +0000 (0:00:00.085) 0:00:00.085 **** 2025-09-20 10:19:51.647090 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647102 | orchestrator | 2025-09-20 10:19:51.647113 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-20 10:19:51.647124 | orchestrator | Saturday 20 September 2025 10:13:00 +0000 (0:00:03.956) 0:00:04.041 **** 2025-09-20 10:19:51.647135 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647145 | orchestrator | 2025-09-20 10:19:51.647156 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-20 10:19:51.647166 | orchestrator | Saturday 20 September 2025 10:13:04 +0000 (0:00:04.248) 0:00:08.290 **** 2025-09-20 10:19:51.647176 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647187 | orchestrator | 2025-09-20 10:19:51.647197 | orchestrator | TASK [Create test project] ***************************************************** 2025-09-20 10:19:51.647208 | orchestrator | Saturday 20 September 2025 10:13:10 +0000 (0:00:06.260) 0:00:14.551 **** 2025-09-20 10:19:51.647218 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647228 | orchestrator | 2025-09-20 10:19:51.647239 | 
orchestrator | TASK [Create test user] ******************************************************** 2025-09-20 10:19:51.647249 | orchestrator | Saturday 20 September 2025 10:13:14 +0000 (0:00:03.999) 0:00:18.550 **** 2025-09-20 10:19:51.647260 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647271 | orchestrator | 2025-09-20 10:19:51.647282 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-20 10:19:51.647293 | orchestrator | Saturday 20 September 2025 10:13:18 +0000 (0:00:04.189) 0:00:22.740 **** 2025-09-20 10:19:51.647303 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-09-20 10:19:51.647314 | orchestrator | changed: [localhost] => (item=member) 2025-09-20 10:19:51.647325 | orchestrator | changed: [localhost] => (item=creator) 2025-09-20 10:19:51.647336 | orchestrator | 2025-09-20 10:19:51.647346 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-20 10:19:51.647356 | orchestrator | Saturday 20 September 2025 10:13:31 +0000 (0:00:12.195) 0:00:34.935 **** 2025-09-20 10:19:51.647367 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647377 | orchestrator | 2025-09-20 10:19:51.647390 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-20 10:19:51.647402 | orchestrator | Saturday 20 September 2025 10:13:35 +0000 (0:00:03.998) 0:00:38.934 **** 2025-09-20 10:19:51.647414 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647426 | orchestrator | 2025-09-20 10:19:51.647438 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-09-20 10:19:51.647449 | orchestrator | Saturday 20 September 2025 10:13:39 +0000 (0:00:04.624) 0:00:43.558 **** 2025-09-20 10:19:51.647461 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647473 | orchestrator | 2025-09-20 10:19:51.647485 | orchestrator | TASK [Create icmp security group] 
********************************************** 2025-09-20 10:19:51.647497 | orchestrator | Saturday 20 September 2025 10:13:43 +0000 (0:00:03.898) 0:00:47.456 **** 2025-09-20 10:19:51.647509 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647521 | orchestrator | 2025-09-20 10:19:51.647533 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-09-20 10:19:51.647545 | orchestrator | Saturday 20 September 2025 10:13:47 +0000 (0:00:03.653) 0:00:51.110 **** 2025-09-20 10:19:51.647556 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647569 | orchestrator | 2025-09-20 10:19:51.647581 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-09-20 10:19:51.647592 | orchestrator | Saturday 20 September 2025 10:13:51 +0000 (0:00:04.059) 0:00:55.169 **** 2025-09-20 10:19:51.647604 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647616 | orchestrator | 2025-09-20 10:19:51.647627 | orchestrator | TASK [Create test network topology] ******************************************** 2025-09-20 10:19:51.647640 | orchestrator | Saturday 20 September 2025 10:13:55 +0000 (0:00:03.988) 0:00:59.157 **** 2025-09-20 10:19:51.647659 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.647671 | orchestrator | 2025-09-20 10:19:51.647683 | orchestrator | TASK [Create test instances] *************************************************** 2025-09-20 10:19:51.647695 | orchestrator | Saturday 20 September 2025 10:14:11 +0000 (0:00:15.953) 0:01:15.110 **** 2025-09-20 10:19:51.647707 | orchestrator | changed: [localhost] => (item=test) 2025-09-20 10:19:51.647720 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-20 10:19:51.647732 | orchestrator | 2025-09-20 10:19:51.647743 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-20 10:19:51.647753 | orchestrator | 2025-09-20 10:19:51.647763 | orchestrator | STILL ALIVE 
[task 'Create test instances' is running] ************************** 2025-09-20 10:19:51.647774 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-20 10:19:51.647784 | orchestrator | 2025-09-20 10:19:51.647794 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-20 10:19:51.647805 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-20 10:19:51.647815 | orchestrator | 2025-09-20 10:19:51.647825 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-20 10:19:51.647836 | orchestrator | 2025-09-20 10:19:51.647846 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-20 10:19:51.647857 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-20 10:19:51.647871 | orchestrator | 2025-09-20 10:19:51.647882 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-09-20 10:19:51.647893 | orchestrator | Saturday 20 September 2025 10:18:27 +0000 (0:04:16.482) 0:05:31.593 **** 2025-09-20 10:19:51.647904 | orchestrator | changed: [localhost] => (item=test) 2025-09-20 10:19:51.647950 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-20 10:19:51.647963 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-20 10:19:51.647974 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-20 10:19:51.647984 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-20 10:19:51.647995 | orchestrator | 2025-09-20 10:19:51.648006 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-09-20 10:19:51.648035 | orchestrator | Saturday 20 September 2025 10:18:51 +0000 (0:00:23.492) 0:05:55.086 **** 2025-09-20 10:19:51.648047 | orchestrator | changed: [localhost] => (item=test) 2025-09-20 10:19:51.648058 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-20 10:19:51.648069 | 
orchestrator | changed: [localhost] => (item=test-2) 2025-09-20 10:19:51.648079 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-20 10:19:51.648090 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-20 10:19:51.648101 | orchestrator | 2025-09-20 10:19:51.648111 | orchestrator | TASK [Create test volume] ****************************************************** 2025-09-20 10:19:51.648122 | orchestrator | Saturday 20 September 2025 10:19:25 +0000 (0:00:34.557) 0:06:29.644 **** 2025-09-20 10:19:51.648133 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.648144 | orchestrator | 2025-09-20 10:19:51.648155 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-09-20 10:19:51.648165 | orchestrator | Saturday 20 September 2025 10:19:32 +0000 (0:00:06.386) 0:06:36.030 **** 2025-09-20 10:19:51.648176 | orchestrator | changed: [localhost] 2025-09-20 10:19:51.648187 | orchestrator | 2025-09-20 10:19:51.648197 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-09-20 10:19:51.648208 | orchestrator | Saturday 20 September 2025 10:19:45 +0000 (0:00:13.464) 0:06:49.495 **** 2025-09-20 10:19:51.648219 | orchestrator | ok: [localhost] 2025-09-20 10:19:51.648230 | orchestrator | 2025-09-20 10:19:51.648241 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-09-20 10:19:51.648251 | orchestrator | Saturday 20 September 2025 10:19:51 +0000 (0:00:05.547) 0:06:55.042 **** 2025-09-20 10:19:51.648267 | orchestrator | ok: [localhost] => { 2025-09-20 10:19:51.648278 | orchestrator |  "msg": "192.168.112.110" 2025-09-20 10:19:51.648290 | orchestrator | } 2025-09-20 10:19:51.648301 | orchestrator | 2025-09-20 10:19:51.648312 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-20 10:19:51.648331 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-09-20 10:19:51.648343 | orchestrator | 2025-09-20 10:19:51.648354 | orchestrator | 2025-09-20 10:19:51.648364 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-20 10:19:51.648375 | orchestrator | Saturday 20 September 2025 10:19:51 +0000 (0:00:00.049) 0:06:55.092 **** 2025-09-20 10:19:51.648386 | orchestrator | =============================================================================== 2025-09-20 10:19:51.648397 | orchestrator | Create test instances ------------------------------------------------- 256.48s 2025-09-20 10:19:51.648408 | orchestrator | Add tag to instances --------------------------------------------------- 34.56s 2025-09-20 10:19:51.648418 | orchestrator | Add metadata to instances ---------------------------------------------- 23.49s 2025-09-20 10:19:51.648429 | orchestrator | Create test network topology ------------------------------------------- 15.95s 2025-09-20 10:19:51.648440 | orchestrator | Attach test volume ----------------------------------------------------- 13.46s 2025-09-20 10:19:51.648451 | orchestrator | Add member roles to user test ------------------------------------------ 12.20s 2025-09-20 10:19:51.648461 | orchestrator | Create test volume ------------------------------------------------------ 6.39s 2025-09-20 10:19:51.648472 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.26s 2025-09-20 10:19:51.648483 | orchestrator | Create floating ip address ---------------------------------------------- 5.55s 2025-09-20 10:19:51.648494 | orchestrator | Create ssh security group ----------------------------------------------- 4.62s 2025-09-20 10:19:51.648505 | orchestrator | Create test-admin user -------------------------------------------------- 4.25s 2025-09-20 10:19:51.648516 | orchestrator | Create test user -------------------------------------------------------- 4.19s 2025-09-20 10:19:51.648526 
| orchestrator | Add rule to icmp security group ----------------------------------------- 4.06s 2025-09-20 10:19:51.648537 | orchestrator | Create test project ----------------------------------------------------- 4.00s 2025-09-20 10:19:51.648548 | orchestrator | Create test server group ------------------------------------------------ 4.00s 2025-09-20 10:19:51.648559 | orchestrator | Create test keypair ----------------------------------------------------- 3.99s 2025-09-20 10:19:51.648569 | orchestrator | Create test domain ------------------------------------------------------ 3.96s 2025-09-20 10:19:51.648580 | orchestrator | Add rule to ssh security group ------------------------------------------ 3.90s 2025-09-20 10:19:51.648591 | orchestrator | Create icmp security group ---------------------------------------------- 3.65s 2025-09-20 10:19:51.648602 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-09-20 10:19:52.001482 | orchestrator | + server_list 2025-09-20 10:19:52.001578 | orchestrator | + openstack --os-cloud test server list 2025-09-20 10:19:56.140137 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-09-20 10:19:56.140243 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-09-20 10:19:56.140259 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-09-20 10:19:56.140271 | orchestrator | | 1a8d026f-e2b8-4cac-ade3-bf901e93700d | test-4 | ACTIVE | auto_allocated_network=10.42.0.15, 192.168.112.175 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 10:19:56.140282 | orchestrator | | c4371d31-25d4-47f4-b4b5-37694729c7e7 | test-3 | ACTIVE | auto_allocated_network=10.42.0.61, 192.168.112.179 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 
10:19:56.140292 | orchestrator | | 8b8ac489-87a8-4ced-92df-852886e70a68 | test-2 | ACTIVE | auto_allocated_network=10.42.0.29, 192.168.112.174 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 10:19:56.140303 | orchestrator | | 050aef8b-1719-4a79-91d8-0464c6c04ca8 | test-1 | ACTIVE | auto_allocated_network=10.42.0.48, 192.168.112.127 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 10:19:56.140341 | orchestrator | | 944d89d7-2d5d-4298-96e8-8a1d6cf98856 | test | ACTIVE | auto_allocated_network=10.42.0.19, 192.168.112.110 | N/A (booted from volume) | SCS-1L-1 | 2025-09-20 10:19:56.140353 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-09-20 10:19:56.458469 | orchestrator | + openstack --os-cloud test server show test 2025-09-20 10:20:00.080976 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:00.081139 | orchestrator | | Field | Value | 2025-09-20 10:20:00.081169 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:00.081183 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-20 10:20:00.081195 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 
10:20:00.081206 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 10:20:00.081217 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-09-20 10:20:00.081229 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 10:20:00.081240 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 10:20:00.081293 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 10:20:00.081306 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 10:20:00.081322 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 10:20:00.081334 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 10:20:00.081345 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 10:20:00.081356 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 10:20:00.081367 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-20 10:20:00.081379 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 10:20:00.081390 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 10:20:00.081409 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T10:14:54.000000 | 2025-09-20 10:20:00.081428 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 10:20:00.081448 | orchestrator | | accessIPv4 | | 2025-09-20 10:20:00.081460 | orchestrator | | accessIPv6 | | 2025-09-20 10:20:00.081471 | orchestrator | | addresses | auto_allocated_network=10.42.0.19, 192.168.112.110 | 2025-09-20 10:20:00.081482 | orchestrator | | config_drive | | 2025-09-20 10:20:00.081494 | orchestrator | | created | 2025-09-20T10:14:19Z | 2025-09-20 10:20:00.081505 | orchestrator | | description | None | 2025-09-20 10:20:00.081516 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, 
is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 10:20:00.081540 | orchestrator | | hostId | 1fd32844a3bf1a9e0409fb7fa010a16792ee2722ec39685f73fb0ac8 | 2025-09-20 10:20:00.081555 | orchestrator | | host_status | None | 2025-09-20 10:20:00.081577 | orchestrator | | id | 944d89d7-2d5d-4298-96e8-8a1d6cf98856 | 2025-09-20 10:20:00.081591 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 10:20:00.081604 | orchestrator | | key_name | test | 2025-09-20 10:20:00.081617 | orchestrator | | locked | False | 2025-09-20 10:20:00.081630 | orchestrator | | locked_reason | None | 2025-09-20 10:20:00.081643 | orchestrator | | name | test | 2025-09-20 10:20:00.081662 | orchestrator | | pinned_availability_zone | None | 2025-09-20 10:20:00.081683 | orchestrator | | progress | 0 | 2025-09-20 10:20:00.081697 | orchestrator | | project_id | 5ac3a6a466a449c7897d4251922b3ad1 | 2025-09-20 10:20:00.081710 | orchestrator | | properties | hostname='test' | 2025-09-20 10:20:00.081730 | orchestrator | | security_groups | name='ssh' | 2025-09-20 10:20:00.081749 | orchestrator | | | name='icmp' | 2025-09-20 10:20:00.081762 | orchestrator | | server_groups | None | 2025-09-20 10:20:00.081777 | orchestrator | | status | ACTIVE | 2025-09-20 10:20:00.081791 | orchestrator | | tags | test | 2025-09-20 10:20:00.081804 | orchestrator | | trusted_image_certificates | None | 2025-09-20 10:20:00.081823 | orchestrator | | updated | 2025-09-20T10:18:32Z | 2025-09-20 10:20:00.081837 | orchestrator | | user_id | e1542337bd3c4710854ef1fb96e5cfbb | 2025-09-20 10:20:00.081850 | orchestrator | | volumes_attached | delete_on_termination='True', id='f7092e7e-986a-4a40-8296-d07807ae8fdb' | 2025-09-20 10:20:00.081863 | orchestrator | | | delete_on_termination='False', id='9be4da22-f221-4a8c-a0a8-20c72f943e54' | 2025-09-20 10:20:00.084409 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:00.396793 | orchestrator | + openstack --os-cloud test server show test-1 2025-09-20 10:20:03.983157 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:03.983270 | orchestrator | | Field | Value | 2025-09-20 10:20:03.983299 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:03.983322 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-20 10:20:03.983342 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 10:20:03.983385 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 10:20:03.983408 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-09-20 10:20:03.983428 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 10:20:03.983449 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 
10:20:03.983490 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 10:20:03.983522 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 10:20:03.983543 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 10:20:03.983602 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 10:20:03.983623 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 10:20:03.983656 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 10:20:03.983677 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-20 10:20:03.983700 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 10:20:03.983721 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 10:20:03.983744 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T10:15:49.000000 | 2025-09-20 10:20:03.983777 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 10:20:03.983807 | orchestrator | | accessIPv4 | | 2025-09-20 10:20:03.983832 | orchestrator | | accessIPv6 | | 2025-09-20 10:20:03.983857 | orchestrator | | addresses | auto_allocated_network=10.42.0.48, 192.168.112.127 | 2025-09-20 10:20:03.983892 | orchestrator | | config_drive | | 2025-09-20 10:20:03.983916 | orchestrator | | created | 2025-09-20T10:15:15Z | 2025-09-20 10:20:03.983981 | orchestrator | | description | None | 2025-09-20 10:20:03.984002 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 10:20:03.984023 | orchestrator | | hostId | e49d620acf9294c10b7bd3bd83aeaee14b9029cc5af6fcfa0bb940cd | 2025-09-20 10:20:03.984043 | orchestrator | | host_status | None | 2025-09-20 10:20:03.984074 | orchestrator 
| | id | 050aef8b-1719-4a79-91d8-0464c6c04ca8 | 2025-09-20 10:20:03.984096 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 10:20:03.984116 | orchestrator | | key_name | test | 2025-09-20 10:20:03.984150 | orchestrator | | locked | False | 2025-09-20 10:20:03.984169 | orchestrator | | locked_reason | None | 2025-09-20 10:20:03.984189 | orchestrator | | name | test-1 | 2025-09-20 10:20:03.984209 | orchestrator | | pinned_availability_zone | None | 2025-09-20 10:20:03.984229 | orchestrator | | progress | 0 | 2025-09-20 10:20:03.984250 | orchestrator | | project_id | 5ac3a6a466a449c7897d4251922b3ad1 | 2025-09-20 10:20:03.984281 | orchestrator | | properties | hostname='test-1' | 2025-09-20 10:20:03.984313 | orchestrator | | security_groups | name='ssh' | 2025-09-20 10:20:03.984341 | orchestrator | | | name='icmp' | 2025-09-20 10:20:03.984363 | orchestrator | | server_groups | None | 2025-09-20 10:20:03.984401 | orchestrator | | status | ACTIVE | 2025-09-20 10:20:03.984423 | orchestrator | | tags | test | 2025-09-20 10:20:03.984444 | orchestrator | | trusted_image_certificates | None | 2025-09-20 10:20:03.984465 | orchestrator | | updated | 2025-09-20T10:18:36Z | 2025-09-20 10:20:03.984485 | orchestrator | | user_id | e1542337bd3c4710854ef1fb96e5cfbb | 2025-09-20 10:20:03.984505 | orchestrator | | volumes_attached | delete_on_termination='True', id='3f28edbb-73cc-4871-b7a9-cacfd5cb2ccc' | 2025-09-20 10:20:03.988255 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:04.303390 | orchestrator | + openstack --os-cloud test server show test-2 2025-09-20 10:20:07.811620 
| orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:07.811710 | orchestrator | | Field | Value | 2025-09-20 10:20:07.811748 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:07.811762 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-20 10:20:07.811774 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 10:20:07.811787 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 10:20:07.811799 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-09-20 10:20:07.811812 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 10:20:07.811824 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 10:20:07.811854 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 10:20:07.811871 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 10:20:07.811891 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 10:20:07.811903 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 10:20:07.811915 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 10:20:07.811972 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 10:20:07.811985 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2025-09-20 10:20:07.811996 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 10:20:07.812008 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 10:20:07.812019 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T10:16:46.000000 | 2025-09-20 10:20:07.812038 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 10:20:07.812061 | orchestrator | | accessIPv4 | | 2025-09-20 10:20:07.812072 | orchestrator | | accessIPv6 | | 2025-09-20 10:20:07.812084 | orchestrator | | addresses | auto_allocated_network=10.42.0.29, 192.168.112.174 | 2025-09-20 10:20:07.812095 | orchestrator | | config_drive | | 2025-09-20 10:20:07.812106 | orchestrator | | created | 2025-09-20T10:16:10Z | 2025-09-20 10:20:07.812117 | orchestrator | | description | None | 2025-09-20 10:20:07.812129 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 10:20:07.812140 | orchestrator | | hostId | 927ccf456bda9d759ce3db2af57cf7b34fe0269947c65bf1616ee6a0 | 2025-09-20 10:20:07.812151 | orchestrator | | host_status | None | 2025-09-20 10:20:07.812175 | orchestrator | | id | 8b8ac489-87a8-4ced-92df-852886e70a68 | 2025-09-20 10:20:07.812191 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 10:20:07.812202 | orchestrator | | key_name | test | 2025-09-20 10:20:07.812214 | orchestrator | | locked | False | 2025-09-20 10:20:07.812225 | orchestrator | | locked_reason | None | 2025-09-20 10:20:07.812236 | orchestrator | | name | test-2 | 2025-09-20 10:20:07.812247 | orchestrator | | pinned_availability_zone | None | 2025-09-20 10:20:07.812258 | orchestrator | | progress | 0 | 
2025-09-20 10:20:07.812269 | orchestrator | | project_id | 5ac3a6a466a449c7897d4251922b3ad1 | 2025-09-20 10:20:07.812280 | orchestrator | | properties | hostname='test-2' | 2025-09-20 10:20:07.812303 | orchestrator | | security_groups | name='ssh' | 2025-09-20 10:20:07.812315 | orchestrator | | | name='icmp' | 2025-09-20 10:20:07.812327 | orchestrator | | server_groups | None | 2025-09-20 10:20:07.812338 | orchestrator | | status | ACTIVE | 2025-09-20 10:20:07.812349 | orchestrator | | tags | test | 2025-09-20 10:20:07.812360 | orchestrator | | trusted_image_certificates | None | 2025-09-20 10:20:07.812371 | orchestrator | | updated | 2025-09-20T10:18:41Z | 2025-09-20 10:20:07.812382 | orchestrator | | user_id | e1542337bd3c4710854ef1fb96e5cfbb | 2025-09-20 10:20:07.812393 | orchestrator | | volumes_attached | delete_on_termination='True', id='691b88c8-1eb8-43eb-96dd-4423a6351d7c' | 2025-09-20 10:20:07.815805 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:08.164243 | orchestrator | + openstack --os-cloud test server show test-3 2025-09-20 10:20:11.167268 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:11.167335 | orchestrator | | Field | Value | 2025-09-20 10:20:11.167350 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:11.167363 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-20 10:20:11.167375 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 10:20:11.167386 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 10:20:11.167397 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-09-20 10:20:11.167409 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 10:20:11.167440 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 10:20:11.167466 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 10:20:11.167479 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 10:20:11.167495 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 10:20:11.167507 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 10:20:11.167518 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 10:20:11.167530 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 10:20:11.167541 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-20 10:20:11.167553 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 10:20:11.167578 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 10:20:11.167590 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T10:17:31.000000 | 2025-09-20 10:20:11.167609 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 10:20:11.167621 | orchestrator | | accessIPv4 | | 2025-09-20 10:20:11.167637 | orchestrator | | accessIPv6 | | 2025-09-20 10:20:11.167649 | 
orchestrator | | addresses | auto_allocated_network=10.42.0.61, 192.168.112.179 | 2025-09-20 10:20:11.167660 | orchestrator | | config_drive | | 2025-09-20 10:20:11.167672 | orchestrator | | created | 2025-09-20T10:17:05Z | 2025-09-20 10:20:11.167683 | orchestrator | | description | None | 2025-09-20 10:20:11.167701 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 10:20:11.167712 | orchestrator | | hostId | e49d620acf9294c10b7bd3bd83aeaee14b9029cc5af6fcfa0bb940cd | 2025-09-20 10:20:11.167724 | orchestrator | | host_status | None | 2025-09-20 10:20:11.167742 | orchestrator | | id | c4371d31-25d4-47f4-b4b5-37694729c7e7 | 2025-09-20 10:20:11.167759 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 10:20:11.167770 | orchestrator | | key_name | test | 2025-09-20 10:20:11.167782 | orchestrator | | locked | False | 2025-09-20 10:20:11.167793 | orchestrator | | locked_reason | None | 2025-09-20 10:20:11.167804 | orchestrator | | name | test-3 | 2025-09-20 10:20:11.167816 | orchestrator | | pinned_availability_zone | None | 2025-09-20 10:20:11.167835 | orchestrator | | progress | 0 | 2025-09-20 10:20:11.167849 | orchestrator | | project_id | 5ac3a6a466a449c7897d4251922b3ad1 | 2025-09-20 10:20:11.167862 | orchestrator | | properties | hostname='test-3' | 2025-09-20 10:20:11.167883 | orchestrator | | security_groups | name='ssh' | 2025-09-20 10:20:11.167899 | orchestrator | | | name='icmp' | 2025-09-20 10:20:11.167911 | orchestrator | | server_groups | None | 2025-09-20 10:20:11.167922 | orchestrator | | status | ACTIVE | 2025-09-20 10:20:11.167968 | orchestrator | | tags | test | 2025-09-20 
10:20:11.167980 | orchestrator | | trusted_image_certificates | None | 2025-09-20 10:20:11.167998 | orchestrator | | updated | 2025-09-20T10:18:46Z | 2025-09-20 10:20:11.168010 | orchestrator | | user_id | e1542337bd3c4710854ef1fb96e5cfbb | 2025-09-20 10:20:11.168021 | orchestrator | | volumes_attached | delete_on_termination='True', id='b7387457-576d-4661-ae31-4d3b563f0f45' | 2025-09-20 10:20:11.170166 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:11.361875 | orchestrator | + openstack --os-cloud test server show test-4 2025-09-20 10:20:14.519606 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:14.519736 | orchestrator | | Field | Value | 2025-09-20 10:20:14.519755 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:14.519768 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-20 
10:20:14.519780 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-20 10:20:14.519812 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-20 10:20:14.519824 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-09-20 10:20:14.519835 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-20 10:20:14.519847 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-20 10:20:14.519875 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-20 10:20:14.519888 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-20 10:20:14.519900 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-20 10:20:14.519911 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-20 10:20:14.519922 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-20 10:20:14.519985 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-20 10:20:14.519998 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-20 10:20:14.520010 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-20 10:20:14.520021 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-20 10:20:14.520454 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-20T10:18:14.000000 | 2025-09-20 10:20:14.520480 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-20 10:20:14.520493 | orchestrator | | accessIPv4 | | 2025-09-20 10:20:14.520504 | orchestrator | | accessIPv6 | | 2025-09-20 10:20:14.520515 | orchestrator | | addresses | auto_allocated_network=10.42.0.15, 192.168.112.175 | 2025-09-20 10:20:14.520534 | orchestrator | | config_drive | | 2025-09-20 10:20:14.520546 | orchestrator | | created | 2025-09-20T10:17:49Z | 2025-09-20 10:20:14.520557 | orchestrator | | description | None | 2025-09-20 10:20:14.520568 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', 
extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-20 10:20:14.520579 | orchestrator | | hostId | 1fd32844a3bf1a9e0409fb7fa010a16792ee2722ec39685f73fb0ac8 | 2025-09-20 10:20:14.520595 | orchestrator | | host_status | None | 2025-09-20 10:20:14.520614 | orchestrator | | id | 1a8d026f-e2b8-4cac-ade3-bf901e93700d | 2025-09-20 10:20:14.520626 | orchestrator | | image | N/A (booted from volume) | 2025-09-20 10:20:14.520637 | orchestrator | | key_name | test | 2025-09-20 10:20:14.520648 | orchestrator | | locked | False | 2025-09-20 10:20:14.520665 | orchestrator | | locked_reason | None | 2025-09-20 10:20:14.520677 | orchestrator | | name | test-4 | 2025-09-20 10:20:14.520688 | orchestrator | | pinned_availability_zone | None | 2025-09-20 10:20:14.520699 | orchestrator | | progress | 0 | 2025-09-20 10:20:14.520710 | orchestrator | | project_id | 5ac3a6a466a449c7897d4251922b3ad1 | 2025-09-20 10:20:14.520725 | orchestrator | | properties | hostname='test-4' | 2025-09-20 10:20:14.520744 | orchestrator | | security_groups | name='ssh' | 2025-09-20 10:20:14.520756 | orchestrator | | | name='icmp' | 2025-09-20 10:20:14.520768 | orchestrator | | server_groups | None | 2025-09-20 10:20:14.520786 | orchestrator | | status | ACTIVE | 2025-09-20 10:20:14.520797 | orchestrator | | tags | test | 2025-09-20 10:20:14.520808 | orchestrator | | trusted_image_certificates | None | 2025-09-20 10:20:14.520819 | orchestrator | | updated | 2025-09-20T10:18:51Z | 2025-09-20 10:20:14.520831 | orchestrator | | user_id | e1542337bd3c4710854ef1fb96e5cfbb | 2025-09-20 10:20:14.520842 | orchestrator | | volumes_attached | delete_on_termination='True', id='ccccbea9-5396-4ebc-a19d-c7e7655ad6f3' | 2025-09-20 10:20:14.525398 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-20 10:20:14.850314 | orchestrator | + server_ping 2025-09-20 10:20:14.850949 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-20 10:20:14.852437 | orchestrator | ++ tr -d '\r' 2025-09-20 10:20:17.843719 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:20:17.843811 | orchestrator | + ping -c3 192.168.112.174 2025-09-20 10:20:17.860301 | orchestrator | PING 192.168.112.174 (192.168.112.174) 56(84) bytes of data. 
2025-09-20 10:20:17.860337 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=1 ttl=63 time=8.12 ms 2025-09-20 10:20:18.856301 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=2 ttl=63 time=2.23 ms 2025-09-20 10:20:19.856201 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=3 ttl=63 time=1.71 ms 2025-09-20 10:20:19.856324 | orchestrator | 2025-09-20 10:20:19.856341 | orchestrator | --- 192.168.112.174 ping statistics --- 2025-09-20 10:20:19.856353 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-20 10:20:19.856364 | orchestrator | rtt min/avg/max/mdev = 1.709/4.017/8.116/2.905 ms 2025-09-20 10:20:19.858219 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:20:19.858246 | orchestrator | + ping -c3 192.168.112.127 2025-09-20 10:20:19.868727 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 2025-09-20 10:20:19.868750 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=6.91 ms 2025-09-20 10:20:20.865853 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.65 ms 2025-09-20 10:20:21.868159 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.96 ms 2025-09-20 10:20:21.868252 | orchestrator | 2025-09-20 10:20:21.868267 | orchestrator | --- 192.168.112.127 ping statistics --- 2025-09-20 10:20:21.868279 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 10:20:21.868290 | orchestrator | rtt min/avg/max/mdev = 1.956/3.838/6.913/2.192 ms 2025-09-20 10:20:21.868313 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:20:21.868325 | orchestrator | + ping -c3 192.168.112.110 2025-09-20 10:20:21.882234 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data. 
2025-09-20 10:20:21.882302 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=8.06 ms 2025-09-20 10:20:22.877861 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.21 ms 2025-09-20 10:20:23.880689 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.17 ms 2025-09-20 10:20:23.880797 | orchestrator | 2025-09-20 10:20:23.880814 | orchestrator | --- 192.168.112.110 ping statistics --- 2025-09-20 10:20:23.880826 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 10:20:23.880838 | orchestrator | rtt min/avg/max/mdev = 2.168/4.145/8.055/2.764 ms 2025-09-20 10:20:23.880849 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:20:23.880861 | orchestrator | + ping -c3 192.168.112.175 2025-09-20 10:20:23.895037 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 2025-09-20 10:20:23.895113 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=8.71 ms 2025-09-20 10:20:24.890304 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.09 ms 2025-09-20 10:20:25.891363 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.78 ms 2025-09-20 10:20:25.891469 | orchestrator | 2025-09-20 10:20:25.891485 | orchestrator | --- 192.168.112.175 ping statistics --- 2025-09-20 10:20:25.891498 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 10:20:25.891509 | orchestrator | rtt min/avg/max/mdev = 1.776/4.192/8.711/3.197 ms 2025-09-20 10:20:25.892130 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:20:25.892155 | orchestrator | + ping -c3 192.168.112.179 2025-09-20 10:20:25.904394 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2025-09-20 10:20:25.904437 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.73 ms 2025-09-20 10:20:26.900340 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.40 ms 2025-09-20 10:20:27.901743 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.99 ms 2025-09-20 10:20:27.901837 | orchestrator | 2025-09-20 10:20:27.901851 | orchestrator | --- 192.168.112.179 ping statistics --- 2025-09-20 10:20:27.901864 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 10:20:27.901875 | orchestrator | rtt min/avg/max/mdev = 1.987/4.038/7.730/2.615 ms 2025-09-20 10:20:27.902155 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-20 10:20:27.902179 | orchestrator | + compute_list 2025-09-20 10:20:27.902191 | orchestrator | + osism manage compute list testbed-node-3 2025-09-20 10:20:31.248574 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:20:31.248691 | orchestrator | | ID | Name | Status | 2025-09-20 10:20:31.248706 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 10:20:31.248748 | orchestrator | | 8b8ac489-87a8-4ced-92df-852886e70a68 | test-2 | ACTIVE | 2025-09-20 10:20:31.248760 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:20:31.496461 | orchestrator | + osism manage compute list testbed-node-4 2025-09-20 10:20:34.537386 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:20:34.537501 | orchestrator | | ID | Name | Status | 2025-09-20 10:20:34.537516 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 10:20:34.537528 | orchestrator | | c4371d31-25d4-47f4-b4b5-37694729c7e7 | test-3 | ACTIVE | 2025-09-20 10:20:34.537540 | orchestrator | | 050aef8b-1719-4a79-91d8-0464c6c04ca8 | test-1 | ACTIVE | 2025-09-20 10:20:34.537550 | orchestrator | 
+--------------------------------------+--------+----------+ 2025-09-20 10:20:34.915833 | orchestrator | + osism manage compute list testbed-node-5 2025-09-20 10:20:38.468817 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:20:38.468990 | orchestrator | | ID | Name | Status | 2025-09-20 10:20:38.469007 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 10:20:38.469019 | orchestrator | | 1a8d026f-e2b8-4cac-ade3-bf901e93700d | test-4 | ACTIVE | 2025-09-20 10:20:38.469030 | orchestrator | | 944d89d7-2d5d-4298-96e8-8a1d6cf98856 | test | ACTIVE | 2025-09-20 10:20:38.469041 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:20:38.827479 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-09-20 10:20:42.650706 | orchestrator | 2025-09-20 10:20:42 | INFO  | Live migrating server c4371d31-25d4-47f4-b4b5-37694729c7e7 2025-09-20 10:20:55.704761 | orchestrator | 2025-09-20 10:20:55 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress 2025-09-20 10:20:58.066667 | orchestrator | 2025-09-20 10:20:58 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress 2025-09-20 10:21:00.455593 | orchestrator | 2025-09-20 10:21:00 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress 2025-09-20 10:21:02.741529 | orchestrator | 2025-09-20 10:21:02 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress 2025-09-20 10:21:05.244015 | orchestrator | 2025-09-20 10:21:05 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress 2025-09-20 10:21:07.645184 | orchestrator | 2025-09-20 10:21:07 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress 2025-09-20 10:21:10.092063 | orchestrator | 2025-09-20 
10:21:10 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress 2025-09-20 10:21:12.385369 | orchestrator | 2025-09-20 10:21:12 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress 2025-09-20 10:21:14.702200 | orchestrator | 2025-09-20 10:21:14 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) completed with status ACTIVE 2025-09-20 10:21:14.702311 | orchestrator | 2025-09-20 10:21:14 | INFO  | Live migrating server 050aef8b-1719-4a79-91d8-0464c6c04ca8 2025-09-20 10:21:27.272381 | orchestrator | 2025-09-20 10:21:27 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress 2025-09-20 10:21:29.660107 | orchestrator | 2025-09-20 10:21:29 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress 2025-09-20 10:21:32.026893 | orchestrator | 2025-09-20 10:21:32 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress 2025-09-20 10:21:34.322778 | orchestrator | 2025-09-20 10:21:34 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress 2025-09-20 10:21:36.672619 | orchestrator | 2025-09-20 10:21:36 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress 2025-09-20 10:21:38.983612 | orchestrator | 2025-09-20 10:21:38 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress 2025-09-20 10:21:41.264402 | orchestrator | 2025-09-20 10:21:41 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress 2025-09-20 10:21:43.528526 | orchestrator | 2025-09-20 10:21:43 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress 2025-09-20 10:21:45.850112 | orchestrator | 2025-09-20 10:21:45 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) completed with status 
ACTIVE 2025-09-20 10:21:46.200584 | orchestrator | + compute_list 2025-09-20 10:21:46.200679 | orchestrator | + osism manage compute list testbed-node-3 2025-09-20 10:21:49.339790 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:21:49.339897 | orchestrator | | ID | Name | Status | 2025-09-20 10:21:49.339913 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 10:21:49.339925 | orchestrator | | c4371d31-25d4-47f4-b4b5-37694729c7e7 | test-3 | ACTIVE | 2025-09-20 10:21:49.339936 | orchestrator | | 8b8ac489-87a8-4ced-92df-852886e70a68 | test-2 | ACTIVE | 2025-09-20 10:21:49.339947 | orchestrator | | 050aef8b-1719-4a79-91d8-0464c6c04ca8 | test-1 | ACTIVE | 2025-09-20 10:21:49.339958 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:21:49.658272 | orchestrator | + osism manage compute list testbed-node-4 2025-09-20 10:21:52.460059 | orchestrator | +------+--------+----------+ 2025-09-20 10:21:52.460161 | orchestrator | | ID | Name | Status | 2025-09-20 10:21:52.460176 | orchestrator | |------+--------+----------| 2025-09-20 10:21:52.460187 | orchestrator | +------+--------+----------+ 2025-09-20 10:21:52.759888 | orchestrator | + osism manage compute list testbed-node-5 2025-09-20 10:21:55.720872 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:21:55.721031 | orchestrator | | ID | Name | Status | 2025-09-20 10:21:55.721048 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 10:21:55.721085 | orchestrator | | 1a8d026f-e2b8-4cac-ade3-bf901e93700d | test-4 | ACTIVE | 2025-09-20 10:21:55.721098 | orchestrator | | 944d89d7-2d5d-4298-96e8-8a1d6cf98856 | test | ACTIVE | 2025-09-20 10:21:55.721109 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:21:56.081684 | orchestrator | + server_ping 2025-09-20 10:21:56.082847 | orchestrator | ++ 
openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-20 10:21:56.083113 | orchestrator | ++ tr -d '\r' 2025-09-20 10:21:59.035393 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:21:59.035516 | orchestrator | + ping -c3 192.168.112.174 2025-09-20 10:21:59.043883 | orchestrator | PING 192.168.112.174 (192.168.112.174) 56(84) bytes of data. 2025-09-20 10:21:59.043924 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=1 ttl=63 time=6.12 ms 2025-09-20 10:22:00.041863 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=2 ttl=63 time=2.30 ms 2025-09-20 10:22:01.042959 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=3 ttl=63 time=1.87 ms 2025-09-20 10:22:01.043085 | orchestrator | 2025-09-20 10:22:01.043102 | orchestrator | --- 192.168.112.174 ping statistics --- 2025-09-20 10:22:01.043114 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 10:22:01.043126 | orchestrator | rtt min/avg/max/mdev = 1.866/3.431/6.123/1.911 ms 2025-09-20 10:22:01.044105 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:22:01.044130 | orchestrator | + ping -c3 192.168.112.127 2025-09-20 10:22:01.055335 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 
2025-09-20 10:22:01.055360 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.83 ms 2025-09-20 10:22:02.051787 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.87 ms 2025-09-20 10:22:03.051917 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.85 ms 2025-09-20 10:22:03.052088 | orchestrator | 2025-09-20 10:22:03.052106 | orchestrator | --- 192.168.112.127 ping statistics --- 2025-09-20 10:22:03.052118 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-20 10:22:03.052129 | orchestrator | rtt min/avg/max/mdev = 1.854/4.185/7.834/2.612 ms 2025-09-20 10:22:03.052558 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:22:03.052583 | orchestrator | + ping -c3 192.168.112.110 2025-09-20 10:22:03.063792 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data. 2025-09-20 10:22:03.063815 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=5.93 ms 2025-09-20 10:22:04.062347 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.75 ms 2025-09-20 10:22:05.062931 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=1.52 ms 2025-09-20 10:22:05.063055 | orchestrator | 2025-09-20 10:22:05.063072 | orchestrator | --- 192.168.112.110 ping statistics --- 2025-09-20 10:22:05.063084 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-20 10:22:05.063095 | orchestrator | rtt min/avg/max/mdev = 1.519/3.397/5.925/1.856 ms 2025-09-20 10:22:05.063119 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:22:05.063132 | orchestrator | + ping -c3 192.168.112.175 2025-09-20 10:22:05.071594 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 
2025-09-20 10:22:05.071634 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=5.60 ms
2025-09-20 10:22:06.070147 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.41 ms
2025-09-20 10:22:07.071959 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.94 ms
2025-09-20 10:22:07.072071 | orchestrator |
2025-09-20 10:22:07.072086 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-09-20 10:22:07.072097 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-20 10:22:07.072108 | orchestrator | rtt min/avg/max/mdev = 1.935/3.315/5.597/1.625 ms
2025-09-20 10:22:07.072119 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:22:07.072130 | orchestrator | + ping -c3 192.168.112.179
2025-09-20 10:22:07.086229 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2025-09-20 10:22:07.086279 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=10.6 ms
2025-09-20 10:22:08.081151 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.78 ms
2025-09-20 10:22:09.083329 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.81 ms
2025-09-20 10:22:09.083643 | orchestrator |
2025-09-20 10:22:09.083673 | orchestrator | --- 192.168.112.179 ping statistics ---
2025-09-20 10:22:09.083686 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2005ms
2025-09-20 10:22:09.083698 | orchestrator | rtt min/avg/max/mdev = 2.784/5.404/10.617/3.685 ms
2025-09-20 10:22:09.084051 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2025-09-20 10:22:11.906884 | orchestrator | 2025-09-20 10:22:11 | INFO  | Live migrating server 1a8d026f-e2b8-4cac-ade3-bf901e93700d
2025-09-20 10:22:23.843615 | orchestrator | 2025-09-20 10:22:23 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:22:26.230424 | orchestrator | 2025-09-20 10:22:26 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:22:28.602743 | orchestrator | 2025-09-20 10:22:28 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:22:30.965709 | orchestrator | 2025-09-20 10:22:30 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:22:33.233974 | orchestrator | 2025-09-20 10:22:33 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:22:35.604802 | orchestrator | 2025-09-20 10:22:35 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:22:37.914786 | orchestrator | 2025-09-20 10:22:37 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:22:40.255524 | orchestrator | 2025-09-20 10:22:40 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:22:42.562244 | orchestrator | 2025-09-20 10:22:42 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:22:44.797641 | orchestrator | 2025-09-20 10:22:44 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) completed with status ACTIVE
2025-09-20 10:22:44.797801 | orchestrator | 2025-09-20 10:22:44 | INFO  | Live migrating server 944d89d7-2d5d-4298-96e8-8a1d6cf98856
2025-09-20 10:22:55.779063 | orchestrator | 2025-09-20 10:22:55 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:22:58.147760 | orchestrator | 2025-09-20 10:22:58 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:23:00.488133 | orchestrator | 2025-09-20 10:23:00 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:23:02.828836 | orchestrator | 2025-09-20 10:23:02 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:23:05.113542 | orchestrator | 2025-09-20 10:23:05 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:23:07.511724 | orchestrator | 2025-09-20 10:23:07 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:23:09.798347 | orchestrator | 2025-09-20 10:23:09 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:23:12.106898 | orchestrator | 2025-09-20 10:23:12 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:23:14.411444 | orchestrator | 2025-09-20 10:23:14 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:23:16.690689 | orchestrator | 2025-09-20 10:23:16 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:23:19.014447 | orchestrator | 2025-09-20 10:23:19 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) completed with status ACTIVE
2025-09-20 10:23:19.444211 | orchestrator | + compute_list
2025-09-20 10:23:19.444332 | orchestrator | + osism manage compute list testbed-node-3
2025-09-20 10:23:22.638138 | orchestrator | +--------------------------------------+--------+----------+
2025-09-20 10:23:22.638304 | orchestrator | | ID                                   | Name   | Status   |
2025-09-20 10:23:22.638319 | orchestrator | |--------------------------------------+--------+----------|
2025-09-20 10:23:22.638330 | orchestrator | | 1a8d026f-e2b8-4cac-ade3-bf901e93700d | test-4 | ACTIVE   |
2025-09-20 10:23:22.638340 | orchestrator | | c4371d31-25d4-47f4-b4b5-37694729c7e7 | test-3 | ACTIVE   |
2025-09-20 10:23:22.638350 | orchestrator | | 8b8ac489-87a8-4ced-92df-852886e70a68 | test-2 | ACTIVE   |
2025-09-20 10:23:22.638360 | orchestrator | | 050aef8b-1719-4a79-91d8-0464c6c04ca8 | test-1 | ACTIVE   |
2025-09-20 10:23:22.638387 | orchestrator | | 944d89d7-2d5d-4298-96e8-8a1d6cf98856 | test   | ACTIVE   |
2025-09-20 10:23:22.638407 | orchestrator | +--------------------------------------+--------+----------+
2025-09-20 10:23:22.938320 | orchestrator | + osism manage compute list testbed-node-4
2025-09-20 10:23:25.762460 | orchestrator | +------+--------+----------+
2025-09-20 10:23:25.762590 | orchestrator | | ID   | Name   | Status   |
2025-09-20 10:23:25.762602 | orchestrator | |------+--------+----------|
2025-09-20 10:23:25.762611 | orchestrator | +------+--------+----------+
2025-09-20 10:23:26.000114 | orchestrator | + osism manage compute list testbed-node-5
2025-09-20 10:23:28.479203 | orchestrator | +------+--------+----------+
2025-09-20 10:23:28.479375 | orchestrator | | ID   | Name   | Status   |
2025-09-20 10:23:28.479391 | orchestrator | |------+--------+----------|
2025-09-20 10:23:28.479403 | orchestrator | +------+--------+----------+
2025-09-20 10:23:28.696127 | orchestrator | + server_ping
2025-09-20 10:23:28.696532 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-20 10:23:28.697631 | orchestrator | ++ tr -d '\r'
2025-09-20 10:23:31.364518 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:23:31.366189 | orchestrator | + ping -c3 192.168.112.174
2025-09-20 10:23:31.373801 | orchestrator | PING 192.168.112.174 (192.168.112.174) 56(84) bytes of data.
2025-09-20 10:23:31.373829 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=1 ttl=63 time=7.32 ms
2025-09-20 10:23:32.371288 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=2 ttl=63 time=2.60 ms
2025-09-20 10:23:33.372589 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=3 ttl=63 time=1.86 ms
2025-09-20 10:23:33.372720 | orchestrator |
2025-09-20 10:23:33.372735 | orchestrator | --- 192.168.112.174 ping statistics ---
2025-09-20 10:23:33.372749 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-20 10:23:33.372761 | orchestrator | rtt min/avg/max/mdev = 1.856/3.924/7.322/2.421 ms
2025-09-20 10:23:33.373143 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:23:33.373168 | orchestrator | + ping -c3 192.168.112.127
2025-09-20 10:23:33.388207 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2025-09-20 10:23:33.388238 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=9.73 ms
2025-09-20 10:23:34.382552 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.83 ms
2025-09-20 10:23:35.382869 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.92 ms
2025-09-20 10:23:35.382995 | orchestrator |
2025-09-20 10:23:35.383010 | orchestrator | --- 192.168.112.127 ping statistics ---
2025-09-20 10:23:35.383024 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-20 10:23:35.383035 | orchestrator | rtt min/avg/max/mdev = 1.919/4.824/9.725/3.485 ms
2025-09-20 10:23:35.383239 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:23:35.383261 | orchestrator | + ping -c3 192.168.112.110
2025-09-20 10:23:35.393950 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2025-09-20 10:23:35.394145 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=6.15 ms
2025-09-20 10:23:36.391425 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.21 ms
2025-09-20 10:23:37.392323 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=1.58 ms
2025-09-20 10:23:37.392587 | orchestrator |
2025-09-20 10:23:37.392616 | orchestrator | --- 192.168.112.110 ping statistics ---
2025-09-20 10:23:37.392629 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-20 10:23:37.392641 | orchestrator | rtt min/avg/max/mdev = 1.582/3.316/6.153/2.022 ms
2025-09-20 10:23:37.392665 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:23:37.392678 | orchestrator | + ping -c3 192.168.112.175
2025-09-20 10:23:37.402948 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
2025-09-20 10:23:37.402992 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=5.36 ms
2025-09-20 10:23:38.402254 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.81 ms
2025-09-20 10:23:39.404084 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.77 ms
2025-09-20 10:23:39.404234 | orchestrator |
2025-09-20 10:23:39.404251 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-09-20 10:23:39.404264 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-20 10:23:39.404276 | orchestrator | rtt min/avg/max/mdev = 1.768/3.311/5.361/1.509 ms
2025-09-20 10:23:39.404288 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:23:39.404300 | orchestrator | + ping -c3 192.168.112.179
2025-09-20 10:23:39.419703 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2025-09-20 10:23:39.419740 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=11.2 ms
2025-09-20 10:23:40.413122 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.64 ms
2025-09-20 10:23:41.414493 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.11 ms
2025-09-20 10:23:41.414597 | orchestrator |
2025-09-20 10:23:41.414614 | orchestrator | --- 192.168.112.179 ping statistics ---
2025-09-20 10:23:41.414626 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-20 10:23:41.414638 | orchestrator | rtt min/avg/max/mdev = 2.112/5.322/11.217/4.173 ms
2025-09-20 10:23:41.415247 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-09-20 10:23:44.879662 | orchestrator | 2025-09-20 10:23:44 | INFO  | Live migrating server 1a8d026f-e2b8-4cac-ade3-bf901e93700d
2025-09-20 10:23:56.549880 | orchestrator | 2025-09-20 10:23:56 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:23:58.908831 | orchestrator | 2025-09-20 10:23:58 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:24:01.206359 | orchestrator | 2025-09-20 10:24:01 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:24:03.569369 | orchestrator | 2025-09-20 10:24:03 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:24:05.804849 | orchestrator | 2025-09-20 10:24:05 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:24:08.173260 | orchestrator | 2025-09-20 10:24:08 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:24:10.567476 | orchestrator | 2025-09-20 10:24:10 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:24:12.971941 | orchestrator | 2025-09-20 10:24:12 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:24:15.210419 | orchestrator | 2025-09-20 10:24:15 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) completed with status ACTIVE
2025-09-20 10:24:15.210530 | orchestrator | 2025-09-20 10:24:15 | INFO  | Live migrating server c4371d31-25d4-47f4-b4b5-37694729c7e7
2025-09-20 10:24:27.250839 | orchestrator | 2025-09-20 10:24:27 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:24:29.546975 | orchestrator | 2025-09-20 10:24:29 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:24:31.879422 | orchestrator | 2025-09-20 10:24:31 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:24:34.142181 | orchestrator | 2025-09-20 10:24:34 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:24:36.466111 | orchestrator | 2025-09-20 10:24:36 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:24:38.733080 | orchestrator | 2025-09-20 10:24:38 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:24:41.069121 | orchestrator | 2025-09-20 10:24:41 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:24:43.417369 | orchestrator | 2025-09-20 10:24:43 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:24:45.706156 | orchestrator | 2025-09-20 10:24:45 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) completed with status ACTIVE
2025-09-20 10:24:45.706287 | orchestrator | 2025-09-20 10:24:45 | INFO  | Live migrating server 8b8ac489-87a8-4ced-92df-852886e70a68
2025-09-20 10:24:55.379438 | orchestrator | 2025-09-20 10:24:55 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:24:57.912916 | orchestrator | 2025-09-20 10:24:57 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:25:00.250228 | orchestrator | 2025-09-20 10:25:00 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:25:02.540910 | orchestrator | 2025-09-20 10:25:02 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:25:04.816674 | orchestrator | 2025-09-20 10:25:04 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:25:07.316845 | orchestrator | 2025-09-20 10:25:07 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:25:09.595546 | orchestrator | 2025-09-20 10:25:09 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:25:11.844170 | orchestrator | 2025-09-20 10:25:11 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:25:14.107284 | orchestrator | 2025-09-20 10:25:14 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) completed with status ACTIVE
2025-09-20 10:25:14.107419 | orchestrator | 2025-09-20 10:25:14 | INFO  | Live migrating server 050aef8b-1719-4a79-91d8-0464c6c04ca8
2025-09-20 10:25:25.394254 | orchestrator | 2025-09-20 10:25:25 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:25:27.741596 | orchestrator | 2025-09-20 10:25:27 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:25:30.091182 | orchestrator | 2025-09-20 10:25:30 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:25:32.638448 | orchestrator | 2025-09-20 10:25:32 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:25:34.903491 | orchestrator | 2025-09-20 10:25:34 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:25:37.181765 | orchestrator | 2025-09-20 10:25:37 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:25:39.434656 | orchestrator | 2025-09-20 10:25:39 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:25:41.742333 | orchestrator | 2025-09-20 10:25:41 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:25:44.035487 | orchestrator | 2025-09-20 10:25:44 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) completed with status ACTIVE
2025-09-20 10:25:44.035615 | orchestrator | 2025-09-20 10:25:44 | INFO  | Live migrating server 944d89d7-2d5d-4298-96e8-8a1d6cf98856
2025-09-20 10:25:54.871767 | orchestrator | 2025-09-20 10:25:54 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:25:57.230705 | orchestrator | 2025-09-20 10:25:57 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:25:59.597977 | orchestrator | 2025-09-20 10:25:59 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:26:02.111558 | orchestrator | 2025-09-20 10:26:02 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:26:04.465118 | orchestrator | 2025-09-20 10:26:04 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:26:06.799557 | orchestrator | 2025-09-20 10:26:06 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:26:09.154615 | orchestrator | 2025-09-20 10:26:09 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:26:11.485161 | orchestrator | 2025-09-20 10:26:11 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:26:13.780007 | orchestrator | 2025-09-20 10:26:13 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:26:16.031366 | orchestrator | 2025-09-20 10:26:16 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:26:18.378379 | orchestrator | 2025-09-20 10:26:18 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) completed with status ACTIVE
2025-09-20 10:26:18.677580 | orchestrator | + compute_list
2025-09-20 10:26:18.677674 | orchestrator | + osism manage compute list testbed-node-3
2025-09-20 10:26:21.559343 | orchestrator | +------+--------+----------+
2025-09-20 10:26:21.559437 | orchestrator | | ID   | Name   | Status   |
2025-09-20 10:26:21.559453 | orchestrator | |------+--------+----------|
2025-09-20 10:26:21.559464 | orchestrator | +------+--------+----------+
2025-09-20 10:26:21.914620 | orchestrator | + osism manage compute list testbed-node-4
2025-09-20 10:26:25.134558 | orchestrator | +--------------------------------------+--------+----------+
2025-09-20 10:26:25.134646 | orchestrator | | ID                                   | Name   | Status   |
2025-09-20 10:26:25.134660 | orchestrator | |--------------------------------------+--------+----------|
2025-09-20 10:26:25.134671 | orchestrator | | 1a8d026f-e2b8-4cac-ade3-bf901e93700d | test-4 | ACTIVE   |
2025-09-20 10:26:25.134682 | orchestrator | | c4371d31-25d4-47f4-b4b5-37694729c7e7 | test-3 | ACTIVE   |
2025-09-20 10:26:25.134693 | orchestrator | | 8b8ac489-87a8-4ced-92df-852886e70a68 | test-2 | ACTIVE   |
2025-09-20 10:26:25.134704 | orchestrator | | 050aef8b-1719-4a79-91d8-0464c6c04ca8 | test-1 | ACTIVE   |
2025-09-20 10:26:25.134714 | orchestrator | | 944d89d7-2d5d-4298-96e8-8a1d6cf98856 | test   | ACTIVE   |
2025-09-20 10:26:25.134725 | orchestrator | +--------------------------------------+--------+----------+
2025-09-20 10:26:25.501506 | orchestrator | + osism manage compute list testbed-node-5
2025-09-20 10:26:28.415513 | orchestrator | +------+--------+----------+
2025-09-20 10:26:28.415607 | orchestrator | | ID   | Name   | Status   |
2025-09-20 10:26:28.415622 | orchestrator | |------+--------+----------|
2025-09-20 10:26:28.415633 | orchestrator | +------+--------+----------+
2025-09-20 10:26:28.849686 | orchestrator | + server_ping
2025-09-20 10:26:28.851001 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-20 10:26:28.851152 | orchestrator | ++ tr -d '\r'
2025-09-20 10:26:31.962439 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:26:31.962544 | orchestrator | + ping -c3 192.168.112.174
2025-09-20 10:26:31.970922 | orchestrator | PING 192.168.112.174 (192.168.112.174) 56(84) bytes of data.
2025-09-20 10:26:31.970946 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=1 ttl=63 time=6.26 ms
2025-09-20 10:26:32.969590 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=2 ttl=63 time=2.90 ms
2025-09-20 10:26:33.970694 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=3 ttl=63 time=2.50 ms
2025-09-20 10:26:33.970809 | orchestrator |
2025-09-20 10:26:33.970826 | orchestrator | --- 192.168.112.174 ping statistics ---
2025-09-20 10:26:33.970838 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-20 10:26:33.970849 | orchestrator | rtt min/avg/max/mdev = 2.495/3.886/6.262/1.687 ms
2025-09-20 10:26:33.971085 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:26:33.971108 | orchestrator | + ping -c3 192.168.112.127
2025-09-20 10:26:33.987649 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2025-09-20 10:26:33.987713 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=11.4 ms
2025-09-20 10:26:34.981284 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.83 ms
2025-09-20 10:26:35.980754 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.03 ms
2025-09-20 10:26:35.980855 | orchestrator |
2025-09-20 10:26:35.980871 | orchestrator | --- 192.168.112.127 ping statistics ---
2025-09-20 10:26:35.980882 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-20 10:26:35.980893 | orchestrator | rtt min/avg/max/mdev = 2.026/5.409/11.371/4.228 ms
2025-09-20 10:26:35.981268 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:26:35.981289 | orchestrator | + ping -c3 192.168.112.110
2025-09-20 10:26:35.996281 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2025-09-20 10:26:35.996304 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=9.94 ms
2025-09-20 10:26:36.990928 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.83 ms
2025-09-20 10:26:37.991285 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=1.79 ms
2025-09-20 10:26:37.991383 | orchestrator |
2025-09-20 10:26:37.991398 | orchestrator | --- 192.168.112.110 ping statistics ---
2025-09-20 10:26:37.991410 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-20 10:26:37.991421 | orchestrator | rtt min/avg/max/mdev = 1.791/4.853/9.941/3.622 ms
2025-09-20 10:26:37.991892 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:26:37.991916 | orchestrator | + ping -c3 192.168.112.175
2025-09-20 10:26:38.001104 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
2025-09-20 10:26:38.001127 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=6.18 ms
2025-09-20 10:26:38.999592 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.64 ms
2025-09-20 10:26:40.001744 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=2.11 ms
2025-09-20 10:26:40.001845 | orchestrator |
2025-09-20 10:26:40.001861 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-09-20 10:26:40.001875 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-20 10:26:40.001887 | orchestrator | rtt min/avg/max/mdev = 2.105/3.643/6.182/1.808 ms
2025-09-20 10:26:40.002238 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:26:40.002262 | orchestrator | + ping -c3 192.168.112.179
2025-09-20 10:26:40.019342 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2025-09-20 10:26:40.019399 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=12.0 ms
2025-09-20 10:26:41.010787 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.87 ms
2025-09-20 10:26:42.010490 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.26 ms
2025-09-20 10:26:42.010595 | orchestrator |
2025-09-20 10:26:42.010611 | orchestrator | --- 192.168.112.179 ping statistics ---
2025-09-20 10:26:42.010624 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2025-09-20 10:26:42.010635 | orchestrator | rtt min/avg/max/mdev = 2.263/5.722/12.039/4.473 ms
2025-09-20 10:26:42.011716 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-09-20 10:26:45.254497 | orchestrator | 2025-09-20 10:26:45 | INFO  | Live migrating server 1a8d026f-e2b8-4cac-ade3-bf901e93700d
2025-09-20 10:26:54.948831 | orchestrator | 2025-09-20 10:26:54 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:26:57.300152 | orchestrator | 2025-09-20 10:26:57 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:26:59.608145 | orchestrator | 2025-09-20 10:26:59 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:27:01.971749 | orchestrator | 2025-09-20 10:27:01 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:27:04.210817 | orchestrator | 2025-09-20 10:27:04 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:27:06.508499 | orchestrator | 2025-09-20 10:27:06 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:27:08.897403 | orchestrator | 2025-09-20 10:27:08 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:27:11.265821 | orchestrator | 2025-09-20 10:27:11 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) is still in progress
2025-09-20 10:27:13.728806 | orchestrator | 2025-09-20 10:27:13 | INFO  | Live migration of 1a8d026f-e2b8-4cac-ade3-bf901e93700d (test-4) completed with status ACTIVE
2025-09-20 10:27:13.728918 | orchestrator | 2025-09-20 10:27:13 | INFO  | Live migrating server c4371d31-25d4-47f4-b4b5-37694729c7e7
2025-09-20 10:27:24.737984 | orchestrator | 2025-09-20 10:27:24 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:27:27.067612 | orchestrator | 2025-09-20 10:27:27 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:27:29.465434 | orchestrator | 2025-09-20 10:27:29 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:27:31.692931 | orchestrator | 2025-09-20 10:27:31 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:27:34.076852 | orchestrator | 2025-09-20 10:27:34 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:27:36.339418 | orchestrator | 2025-09-20 10:27:36 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:27:38.611283 | orchestrator | 2025-09-20 10:27:38 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:27:40.865794 | orchestrator | 2025-09-20 10:27:40 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) is still in progress
2025-09-20 10:27:43.123343 | orchestrator | 2025-09-20 10:27:43 | INFO  | Live migration of c4371d31-25d4-47f4-b4b5-37694729c7e7 (test-3) completed with status ACTIVE
2025-09-20 10:27:43.123471 | orchestrator | 2025-09-20 10:27:43 | INFO  | Live migrating server 8b8ac489-87a8-4ced-92df-852886e70a68
2025-09-20 10:27:53.383610 | orchestrator | 2025-09-20 10:27:53 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:27:55.755761 | orchestrator | 2025-09-20 10:27:55 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:27:58.125098 | orchestrator | 2025-09-20 10:27:58 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:28:00.420027 | orchestrator | 2025-09-20 10:28:00 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:28:02.676027 | orchestrator | 2025-09-20 10:28:02 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:28:04.976657 | orchestrator | 2025-09-20 10:28:04 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:28:07.335980 | orchestrator | 2025-09-20 10:28:07 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:28:09.597660 | orchestrator | 2025-09-20 10:28:09 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) is still in progress
2025-09-20 10:28:11.937571 | orchestrator | 2025-09-20 10:28:11 | INFO  | Live migration of 8b8ac489-87a8-4ced-92df-852886e70a68 (test-2) completed with status ACTIVE
2025-09-20 10:28:11.937711 | orchestrator | 2025-09-20 10:28:11 | INFO  | Live migrating server 050aef8b-1719-4a79-91d8-0464c6c04ca8
2025-09-20 10:28:22.138334 | orchestrator | 2025-09-20 10:28:22 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:28:24.501966 | orchestrator | 2025-09-20 10:28:24 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:28:26.874579 | orchestrator | 2025-09-20 10:28:26 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:28:29.182680 | orchestrator | 2025-09-20 10:28:29 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:28:31.452449 | orchestrator | 2025-09-20 10:28:31 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:28:33.836687 | orchestrator | 2025-09-20 10:28:33 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:28:36.153938 | orchestrator | 2025-09-20 10:28:36 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:28:38.446547 | orchestrator | 2025-09-20 10:28:38 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:28:40.696642 | orchestrator | 2025-09-20 10:28:40 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) is still in progress
2025-09-20 10:28:42.976123 | orchestrator | 2025-09-20 10:28:42 | INFO  | Live migration of 050aef8b-1719-4a79-91d8-0464c6c04ca8 (test-1) completed with status ACTIVE
2025-09-20 10:28:42.976274 | orchestrator | 2025-09-20 10:28:42 | INFO  | Live migrating server 944d89d7-2d5d-4298-96e8-8a1d6cf98856
2025-09-20 10:28:53.322330 | orchestrator | 2025-09-20 10:28:53 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:28:55.650416 | orchestrator | 2025-09-20 10:28:55 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:28:57.995427 | orchestrator | 2025-09-20 10:28:57 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:29:00.354292 | orchestrator | 2025-09-20 10:29:00 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:29:02.644563 | orchestrator | 2025-09-20 10:29:02 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:29:04.918352 | orchestrator | 2025-09-20 10:29:04 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:29:07.273426 | orchestrator | 2025-09-20 10:29:07 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:29:09.585513 | orchestrator | 2025-09-20 10:29:09 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:29:11.912818 | orchestrator | 2025-09-20 10:29:11 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:29:14.265247 | orchestrator | 2025-09-20 10:29:14 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) is still in progress
2025-09-20 10:29:16.631210 | orchestrator | 2025-09-20 10:29:16 | INFO  | Live migration of 944d89d7-2d5d-4298-96e8-8a1d6cf98856 (test) completed with status ACTIVE
2025-09-20 10:29:16.936184 | orchestrator | + compute_list
2025-09-20 10:29:16.936275 | orchestrator | + osism manage compute list testbed-node-3
2025-09-20 10:29:19.709177 | orchestrator | +------+--------+----------+
2025-09-20 10:29:19.710001 | orchestrator | | ID   | Name   | Status   |
2025-09-20 10:29:19.710109 | orchestrator | |------+--------+----------|
2025-09-20 10:29:19.710122 | orchestrator | +------+--------+----------+
2025-09-20 10:29:19.912636 | orchestrator | + osism manage compute list testbed-node-4
2025-09-20 10:29:22.524780 | orchestrator | +------+--------+----------+
2025-09-20 10:29:22.524874 | orchestrator | | ID   | Name   | Status   |
2025-09-20 10:29:22.524889 | orchestrator | |------+--------+----------|
2025-09-20 10:29:22.524900 | orchestrator | +------+--------+----------+
2025-09-20 10:29:22.741939 | orchestrator | + osism manage compute list testbed-node-5
2025-09-20 
10:29:25.674608 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:29:25.674696 | orchestrator | | ID | Name | Status | 2025-09-20 10:29:25.674716 | orchestrator | |--------------------------------------+--------+----------| 2025-09-20 10:29:25.674733 | orchestrator | | 1a8d026f-e2b8-4cac-ade3-bf901e93700d | test-4 | ACTIVE | 2025-09-20 10:29:25.674751 | orchestrator | | c4371d31-25d4-47f4-b4b5-37694729c7e7 | test-3 | ACTIVE | 2025-09-20 10:29:25.674767 | orchestrator | | 8b8ac489-87a8-4ced-92df-852886e70a68 | test-2 | ACTIVE | 2025-09-20 10:29:25.674785 | orchestrator | | 050aef8b-1719-4a79-91d8-0464c6c04ca8 | test-1 | ACTIVE | 2025-09-20 10:29:25.674802 | orchestrator | | 944d89d7-2d5d-4298-96e8-8a1d6cf98856 | test | ACTIVE | 2025-09-20 10:29:25.674819 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-20 10:29:25.876431 | orchestrator | + server_ping 2025-09-20 10:29:25.877304 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-20 10:29:25.877336 | orchestrator | ++ tr -d '\r' 2025-09-20 10:29:28.471440 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-20 10:29:28.471569 | orchestrator | + ping -c3 192.168.112.174 2025-09-20 10:29:28.482004 | orchestrator | PING 192.168.112.174 (192.168.112.174) 56(84) bytes of data. 
2025-09-20 10:29:28.482127 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=1 ttl=63 time=9.51 ms
2025-09-20 10:29:29.477193 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=2 ttl=63 time=2.44 ms
2025-09-20 10:29:30.479391 | orchestrator | 64 bytes from 192.168.112.174: icmp_seq=3 ttl=63 time=2.89 ms
2025-09-20 10:29:30.479498 | orchestrator |
2025-09-20 10:29:30.479516 | orchestrator | --- 192.168.112.174 ping statistics ---
2025-09-20 10:29:30.479529 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-20 10:29:30.479541 | orchestrator | rtt min/avg/max/mdev = 2.442/4.945/9.506/3.230 ms
2025-09-20 10:29:30.479711 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:29:30.479816 | orchestrator | + ping -c3 192.168.112.127
2025-09-20 10:29:30.489619 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2025-09-20 10:29:30.489687 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.47 ms
2025-09-20 10:29:31.487018 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.64 ms
2025-09-20 10:29:32.488498 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=2.17 ms
2025-09-20 10:29:32.488623 | orchestrator |
2025-09-20 10:29:32.488652 | orchestrator | --- 192.168.112.127 ping statistics ---
2025-09-20 10:29:32.488673 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-20 10:29:32.488687 | orchestrator | rtt min/avg/max/mdev = 2.173/4.095/7.470/2.393 ms
2025-09-20 10:29:32.488699 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:29:32.488711 | orchestrator | + ping -c3 192.168.112.110
2025-09-20 10:29:32.500739 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2025-09-20 10:29:32.500826 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=7.41 ms
2025-09-20 10:29:33.497788 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.39 ms
2025-09-20 10:29:34.499808 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.12 ms
2025-09-20 10:29:34.499942 | orchestrator |
2025-09-20 10:29:34.499972 | orchestrator | --- 192.168.112.110 ping statistics ---
2025-09-20 10:29:34.499995 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-20 10:29:34.500054 | orchestrator | rtt min/avg/max/mdev = 2.119/3.974/7.413/2.434 ms
2025-09-20 10:29:34.500116 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:29:34.500135 | orchestrator | + ping -c3 192.168.112.175
2025-09-20 10:29:34.510360 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
2025-09-20 10:29:34.510454 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=7.11 ms
2025-09-20 10:29:35.507194 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.60 ms
2025-09-20 10:29:36.508793 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=2.07 ms
2025-09-20 10:29:36.508908 | orchestrator |
2025-09-20 10:29:36.508922 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-09-20 10:29:36.508933 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-20 10:29:36.508942 | orchestrator | rtt min/avg/max/mdev = 2.067/3.924/7.112/2.264 ms
2025-09-20 10:29:36.509198 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-20 10:29:36.509656 | orchestrator | + ping -c3 192.168.112.179
2025-09-20 10:29:36.523886 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2025-09-20 10:29:36.523943 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=9.55 ms
2025-09-20 10:29:37.518639 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.62 ms
2025-09-20 10:29:38.520417 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.45 ms
2025-09-20 10:29:38.520521 | orchestrator |
2025-09-20 10:29:38.520538 | orchestrator | --- 192.168.112.179 ping statistics ---
2025-09-20 10:29:38.520550 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-20 10:29:38.520562 | orchestrator | rtt min/avg/max/mdev = 2.449/4.871/9.545/3.305 ms
2025-09-20 10:29:38.940968 | orchestrator | ok: Runtime: 0:20:57.649880
2025-09-20 10:29:38.994271 |
2025-09-20 10:29:38.994390 | TASK [Run tempest]
2025-09-20 10:29:39.529717 | orchestrator | skipping: Conditional result was False
2025-09-20 10:29:39.547186 |
2025-09-20 10:29:39.547335 | TASK [Check prometheus alert status]
2025-09-20 10:29:40.082139 | orchestrator | skipping: Conditional result was False
2025-09-20 10:29:40.085199 |
2025-09-20 10:29:40.085373 | PLAY RECAP
2025-09-20 10:29:40.085514 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-09-20 10:29:40.085619 |
2025-09-20 10:29:40.307808 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-20 10:29:40.308919 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-20 10:29:41.038895 |
2025-09-20 10:29:41.039059 | PLAY [Post output play]
2025-09-20 10:29:41.056319 |
2025-09-20 10:29:41.056454 | LOOP [stage-output : Register sources]
2025-09-20 10:29:41.131695 |
2025-09-20 10:29:41.132004 | TASK [stage-output : Check sudo]
2025-09-20 10:29:41.946988 | orchestrator | sudo: a password is required
2025-09-20 10:29:42.171502 | orchestrator | ok: Runtime: 0:00:00.014293
2025-09-20 10:29:42.187094 |
2025-09-20 10:29:42.187251 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-20 10:29:42.225392 |
2025-09-20 10:29:42.225674 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-20 10:29:42.294769 | orchestrator | ok
2025-09-20 10:29:42.303400 |
2025-09-20 10:29:42.303517 | LOOP [stage-output : Ensure target folders exist]
2025-09-20 10:29:42.749725 | orchestrator | ok: "docs"
2025-09-20 10:29:42.750095 |
2025-09-20 10:29:42.988209 | orchestrator | ok: "artifacts"
2025-09-20 10:29:43.221204 | orchestrator | ok: "logs"
2025-09-20 10:29:43.238089 |
2025-09-20 10:29:43.238259 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-20 10:29:43.273151 |
2025-09-20 10:29:43.273411 | TASK [stage-output : Make all log files readable]
2025-09-20 10:29:43.554048 | orchestrator | ok
2025-09-20 10:29:43.563295 |
2025-09-20 10:29:43.563425 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-20 10:29:43.598293 | orchestrator | skipping: Conditional result was False
2025-09-20 10:29:43.615487 |
2025-09-20 10:29:43.615684 | TASK [stage-output : Discover log files for compression]
2025-09-20 10:29:43.639937 | orchestrator | skipping: Conditional result was False
2025-09-20 10:29:43.654044 |
2025-09-20 10:29:43.654190 | LOOP [stage-output : Archive everything from logs]
2025-09-20 10:29:43.708319 |
2025-09-20 10:29:43.708483 | PLAY [Post cleanup play]
2025-09-20 10:29:43.716716 |
2025-09-20 10:29:43.716819 | TASK [Set cloud fact (Zuul deployment)]
2025-09-20 10:29:43.779948 | orchestrator | ok
2025-09-20 10:29:43.790609 |
2025-09-20 10:29:43.790719 | TASK [Set cloud fact (local deployment)]
2025-09-20 10:29:43.824409 | orchestrator | skipping: Conditional result was False
2025-09-20 10:29:43.838724 |
2025-09-20 10:29:43.838886 | TASK [Clean the cloud environment]
2025-09-20 10:29:44.798821 | orchestrator | 2025-09-20 10:29:44 - clean up servers
2025-09-20 10:29:45.560852 | orchestrator | 2025-09-20 10:29:45 - testbed-manager
2025-09-20 10:29:45.649595 | orchestrator | 2025-09-20 10:29:45 - testbed-node-3
2025-09-20 10:29:45.741761 | orchestrator | 2025-09-20 10:29:45 - testbed-node-1
2025-09-20 10:29:45.839281 | orchestrator | 2025-09-20 10:29:45 - testbed-node-5
2025-09-20 10:29:45.935383 | orchestrator | 2025-09-20 10:29:45 - testbed-node-4
2025-09-20 10:29:46.034576 | orchestrator | 2025-09-20 10:29:46 - testbed-node-2
2025-09-20 10:29:46.128135 | orchestrator | 2025-09-20 10:29:46 - testbed-node-0
2025-09-20 10:29:46.210207 | orchestrator | 2025-09-20 10:29:46 - clean up keypairs
2025-09-20 10:29:46.229952 | orchestrator | 2025-09-20 10:29:46 - testbed
2025-09-20 10:29:46.255322 | orchestrator | 2025-09-20 10:29:46 - wait for servers to be gone
2025-09-20 10:29:57.077498 | orchestrator | 2025-09-20 10:29:57 - clean up ports
2025-09-20 10:29:57.253304 | orchestrator | 2025-09-20 10:29:57 - 44560a76-94db-40ba-8e8a-4975761cef61
2025-09-20 10:29:57.946552 | orchestrator | 2025-09-20 10:29:57 - 716e2b10-c45a-44a5-8830-cd5a8925834b
2025-09-20 10:29:58.440265 | orchestrator | 2025-09-20 10:29:58 - 9506956f-9255-40ff-b35c-2d63708bf466
2025-09-20 10:29:58.648116 | orchestrator | 2025-09-20 10:29:58 - a386892f-53cc-4b77-a168-2c6b740cc10f
2025-09-20 10:29:58.849252 | orchestrator | 2025-09-20 10:29:58 - c64d4be8-0793-4d41-9878-7b3ef03c75fb
2025-09-20 10:29:59.045898 | orchestrator | 2025-09-20 10:29:59 - e45e7d6e-68a8-4ba3-ac90-6abf23822761
2025-09-20 10:29:59.255480 | orchestrator | 2025-09-20 10:29:59 - f38d0291-7dd1-4130-8edb-a26031c2f4a8
2025-09-20 10:29:59.461438 | orchestrator | 2025-09-20 10:29:59 - clean up volumes
2025-09-20 10:29:59.572173 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-5-node-base
2025-09-20 10:29:59.613877 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-2-node-base
2025-09-20 10:29:59.654966 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-1-node-base
2025-09-20 10:29:59.694876 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-3-node-base
2025-09-20 10:29:59.737661 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-4-node-base
2025-09-20 10:29:59.778690 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-0-node-base
2025-09-20 10:29:59.821016 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-manager-base
2025-09-20 10:29:59.862770 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-3-node-3
2025-09-20 10:29:59.904302 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-0-node-3
2025-09-20 10:29:59.944656 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-4-node-4
2025-09-20 10:29:59.987380 | orchestrator | 2025-09-20 10:29:59 - testbed-volume-1-node-4
2025-09-20 10:30:00.028135 | orchestrator | 2025-09-20 10:30:00 - testbed-volume-5-node-5
2025-09-20 10:30:00.071253 | orchestrator | 2025-09-20 10:30:00 - testbed-volume-7-node-4
2025-09-20 10:30:00.114428 | orchestrator | 2025-09-20 10:30:00 - testbed-volume-2-node-5
2025-09-20 10:30:00.155200 | orchestrator | 2025-09-20 10:30:00 - testbed-volume-6-node-3
2025-09-20 10:30:00.193366 | orchestrator | 2025-09-20 10:30:00 - testbed-volume-8-node-5
2025-09-20 10:30:00.270326 | orchestrator | 2025-09-20 10:30:00 - disconnect routers
2025-09-20 10:30:00.396824 | orchestrator | 2025-09-20 10:30:00 - testbed
2025-09-20 10:30:01.246920 | orchestrator | 2025-09-20 10:30:01 - clean up subnets
2025-09-20 10:30:01.298577 | orchestrator | 2025-09-20 10:30:01 - subnet-testbed-management
2025-09-20 10:30:01.455420 | orchestrator | 2025-09-20 10:30:01 - clean up networks
2025-09-20 10:30:01.624980 | orchestrator | 2025-09-20 10:30:01 - net-testbed-management
2025-09-20 10:30:01.923774 | orchestrator | 2025-09-20 10:30:01 - clean up security groups
2025-09-20 10:30:01.965129 | orchestrator | 2025-09-20 10:30:01 - testbed-node
2025-09-20 10:30:02.075248 | orchestrator | 2025-09-20 10:30:02 - testbed-management
2025-09-20 10:30:02.202107 | orchestrator | 2025-09-20 10:30:02 - clean up floating ips
2025-09-20 10:30:02.240706 | orchestrator | 2025-09-20 10:30:02 - 81.163.193.35
2025-09-20 10:30:02.610793 | orchestrator | 2025-09-20 10:30:02 - clean up routers
2025-09-20 10:30:02.709245 | orchestrator | 2025-09-20 10:30:02 - testbed
2025-09-20 10:30:03.898742 | orchestrator | ok: Runtime: 0:00:19.472142
2025-09-20 10:30:03.902745 |
2025-09-20 10:30:03.902945 | PLAY RECAP
2025-09-20 10:30:03.903076 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-20 10:30:03.903139 |
2025-09-20 10:30:04.048850 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-20 10:30:04.051626 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-20 10:30:04.782373 |
2025-09-20 10:30:04.782502 | PLAY [Cleanup play]
2025-09-20 10:30:04.796765 |
2025-09-20 10:30:04.796879 | TASK [Set cloud fact (Zuul deployment)]
2025-09-20 10:30:04.846076 | orchestrator | ok
2025-09-20 10:30:04.852506 |
2025-09-20 10:30:04.852629 | TASK [Set cloud fact (local deployment)]
2025-09-20 10:30:04.876459 | orchestrator | skipping: Conditional result was False
2025-09-20 10:30:04.885525 |
2025-09-20 10:30:04.885847 | TASK [Clean the cloud environment]
2025-09-20 10:30:05.954567 | orchestrator | 2025-09-20 10:30:05 - clean up servers
2025-09-20 10:30:06.476688 | orchestrator | 2025-09-20 10:30:06 - clean up keypairs
2025-09-20 10:30:06.493931 | orchestrator | 2025-09-20 10:30:06 - wait for servers to be gone
2025-09-20 10:30:06.533776 | orchestrator | 2025-09-20 10:30:06 - clean up ports
2025-09-20 10:30:06.614201 | orchestrator | 2025-09-20 10:30:06 - clean up volumes
2025-09-20 10:30:06.672561 | orchestrator | 2025-09-20 10:30:06 - disconnect routers
2025-09-20 10:30:06.710345 | orchestrator | 2025-09-20 10:30:06 - clean up subnets
2025-09-20 10:30:06.731562 | orchestrator | 2025-09-20 10:30:06 - clean up networks
2025-09-20 10:30:06.884879 | orchestrator | 2025-09-20 10:30:06 - clean up security groups
2025-09-20 10:30:06.925437 | orchestrator | 2025-09-20 10:30:06 - clean up floating ips
2025-09-20 10:30:06.949327 | orchestrator | 2025-09-20 10:30:06 - clean up routers
2025-09-20 10:30:07.436785 | orchestrator | ok: Runtime: 0:00:01.335090
2025-09-20 10:30:07.440241 |
2025-09-20 10:30:07.440379 | PLAY RECAP
2025-09-20 10:30:07.440494 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-20 10:30:07.440580 |
2025-09-20 10:30:07.534571 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-20 10:30:07.537317 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-20 10:30:08.240866 |
2025-09-20 10:30:08.240987 | PLAY [Base post-fetch]
2025-09-20 10:30:08.254489 |
2025-09-20 10:30:08.254602 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-20 10:30:08.308367 | orchestrator | skipping: Conditional result was False
2025-09-20 10:30:08.314460 |
2025-09-20 10:30:08.314577 | TASK [fetch-output : Set log path for single node]
2025-09-20 10:30:08.342614 | orchestrator | ok
2025-09-20 10:30:08.348333 |
2025-09-20 10:30:08.348415 | LOOP [fetch-output : Ensure local output dirs]
2025-09-20 10:30:08.756859 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/c03789dedb7743b89e40ae39dfe93df5/work/logs"
2025-09-20 10:30:09.008204 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c03789dedb7743b89e40ae39dfe93df5/work/artifacts"
2025-09-20 10:30:09.260149 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c03789dedb7743b89e40ae39dfe93df5/work/docs"
2025-09-20 10:30:09.275945 |
2025-09-20 10:30:09.276062 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-20 10:30:10.214061 | orchestrator | changed: .d..t...... ./
2025-09-20 10:30:10.214400 | orchestrator | changed: All items complete
2025-09-20 10:30:10.214464 |
2025-09-20 10:30:10.920690 | orchestrator | changed: .d..t...... ./
2025-09-20 10:30:11.649166 | orchestrator | changed: .d..t...... ./
2025-09-20 10:30:11.688866 |
2025-09-20 10:30:11.689014 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-20 10:30:11.729038 | orchestrator | skipping: Conditional result was False
2025-09-20 10:30:11.732649 | orchestrator | skipping: Conditional result was False
2025-09-20 10:30:11.749433 |
2025-09-20 10:30:11.749595 | PLAY RECAP
2025-09-20 10:30:11.749681 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-20 10:30:11.749725 |
2025-09-20 10:30:11.876237 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-20 10:30:11.878522 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-20 10:30:12.607372 |
2025-09-20 10:30:12.607567 | PLAY [Base post]
2025-09-20 10:30:12.621993 |
2025-09-20 10:30:12.622127 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-20 10:30:13.506636 | orchestrator | changed
2025-09-20 10:30:13.515640 |
2025-09-20 10:30:13.515761 | PLAY RECAP
2025-09-20 10:30:13.515839 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-20 10:30:13.515911 |
2025-09-20 10:30:13.642815 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-20 10:30:13.644136 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-20 10:30:14.456840 |
2025-09-20 10:30:14.457948 | PLAY [Base post-logs]
2025-09-20 10:30:14.477913 |
2025-09-20 10:30:14.478053 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-20 10:30:14.940955 | localhost | changed
2025-09-20 10:30:14.951009 |
2025-09-20 10:30:14.951155 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-20 10:30:14.987306 | localhost | ok
2025-09-20 10:30:14.991139 |
2025-09-20 10:30:14.991258 | TASK [Set zuul-log-path fact]
2025-09-20 10:30:15.006816 | localhost | ok
2025-09-20 10:30:15.016354 |
2025-09-20 10:30:15.016471 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-20 10:30:15.042868 | localhost | ok
2025-09-20 10:30:15.046780 |
2025-09-20 10:30:15.046947 | TASK [upload-logs : Create log directories]
2025-09-20 10:30:15.545046 | localhost | changed
2025-09-20 10:30:15.550644 |
2025-09-20 10:30:15.550823 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-20 10:30:16.054454 | localhost -> localhost | ok: Runtime: 0:00:00.006246
2025-09-20 10:30:16.063172 |
2025-09-20 10:30:16.063360 | TASK [upload-logs : Upload logs to log server]
2025-09-20 10:30:16.613258 | localhost | Output suppressed because no_log was given
2025-09-20 10:30:16.617005 |
2025-09-20 10:30:16.617182 | LOOP [upload-logs : Compress console log and json output]
2025-09-20 10:30:16.684278 | localhost | skipping: Conditional result was False
2025-09-20 10:30:16.690583 | localhost | skipping: Conditional result was False
2025-09-20 10:30:16.701702 |
2025-09-20 10:30:16.701949 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-20 10:30:16.757704 | localhost | skipping: Conditional result was False
2025-09-20 10:30:16.758370 |
2025-09-20 10:30:16.761731 | localhost | skipping: Conditional result was False
2025-09-20 10:30:16.775242 |
2025-09-20 10:30:16.775436 | LOOP [upload-logs : Upload console log and json output]